id | pid | input | output
---|---|---|---
fb5ce11bfd74e9d7c322444b006a27f2ff32a0cf | fb5ce11bfd74e9d7c322444b006a27f2ff32a0cf_0 | Q: What is task success rate achieved?
Text: Introduction
A significant challenge when designing robots to operate in the real world lies in the generation of control policies that can adapt to changing environments. Programming such policies is a labor- and time-consuming process which requires substantial technical expertise. Imitation learning BIBREF0 is an appealing methodology that aims at overcoming this challenge – instead of complex programming, the user only provides a set of demonstrations of the intended behavior. These demonstrations are consequently distilled into a robot control policy by learning appropriate parameter settings of the controller. Popular approaches to imitation, such as Dynamic Motor Primitives (DMPs) BIBREF1 or Gaussian Mixture Regression (GMR) BIBREF2, largely focus on motion as the sole input and output modality, i.e., joint angles, forces or positions. Critical semantic and visual information regarding the task, such as the appearance of the target object or the type of task performed, is not taken into account during training and reproduction. The result is often a limited generalization capability which largely revolves around adaptation to changes in the object position. While imitation learning has been successfully applied to a wide range of tasks including table-tennis BIBREF3, locomotion BIBREF4, and human-robot interaction BIBREF5, an important question is how to incorporate language and vision into a differentiable end-to-end system for complex robot control.
In this paper, we present an imitation learning approach that combines language, vision, and motion in order to synthesize natural language-conditioned control policies that have strong generalization capabilities while also capturing the semantics of the task. We argue that such a multi-modal teaching approach enables robots to acquire complex policies that generalize to a wide variety of environmental conditions based on descriptions of the intended task. In turn, the network produces control parameters for a lower-level control policy that can be run on a robot to synthesize the corresponding motion. The hierarchical nature of our approach, i.e., a high-level policy generating the parameters of a lower-level policy, allows for generalization of the trained task to a variety of spatial, visual and contextual changes.
Introduction ::: Problem Statement:
In order to outline our problem statement, we contrast our approach with imitation learning BIBREF0, which considers the problem of learning a policy $\mathbf {\pi }$ from a given set of demonstrations ${\cal D}=\lbrace \mathbf {d}^0,.., \mathbf {d}^m\rbrace $. Each demonstration spans a time horizon $T$ and contains information about the robot's states and actions, e.g., demonstrated sensor values and control inputs at each time step. Robot states at each time step within a demonstration are denoted by $\mathbf {x}_t$. In contrast to other imitation learning approaches, we assume that we have access to the raw camera images of the robot $\mathbf {I}_t$ at each time step, as well as access to a verbal description of the task in natural language. This description may provide critical information about the context, goals or objects involved in the task and is denoted as $\mathbf {s}$. Given this information, our overall objective is to learn a policy $\mathbf {\pi }(\mathbf {s}, \mathbf {I})$ which imitates the demonstrated behavior, while also capturing semantics and important visual features. After training, we can provide the policy with a different, new state of the robot and a new verbal description (instruction) as parameters. The policy will then generate the control signals needed to perform the task, taking the new visual input and semantic context into account.
Background
A fundamental challenge in imitation learning is the extraction of policies that not only cover the trained scenarios, but also generalize to a wide range of other situations. A large body of literature has addressed the problem of learning robot motor skills by imitation BIBREF6, learning functional BIBREF1 or probabilistic BIBREF7 representations. However, in most of these approaches, the state vector has to be carefully designed in order to ensure that all necessary information for adaptation is available. Neural approaches to imitation learning BIBREF8 circumvent this problem by learning suitable feature representations from rich data sources for each task or for a sequence of tasks BIBREF9, BIBREF10, BIBREF11. Many of these approaches assume that either a sufficiently large set of motion primitives is already available or that a taxonomy of the task is available, i.e., semantics and motions are not trained in conjunction. The importance of maintaining this connection has been shown in BIBREF12, allowing the robot to adapt to untrained variations of the same task. To learn entirely new tasks, meta-learning aims at learning policy parameters that can quickly be fine-tuned to new tasks BIBREF13. While very successful in dealing with visual and spatial information, these approaches do not incorporate any semantic or linguistic component into the learning process. Language has been shown to successfully generate task descriptions in BIBREF14, and several works have investigated the idea of combining natural language and imitation learning: BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19. However, most approaches do not utilize the inherent connection between semantic task descriptions and low-level motions to train a model.
Our work is most closely related to the framework introduced in BIBREF20, which also focuses on the symbol grounding problem. More specifically, the work in BIBREF20 aims at mapping perceptual features in the external world to constituents in an expert-provided natural language instruction. Our work approaches the problem of generating dynamic robot policies by fundamentally combining language, vision, and motion control into a single differentiable neural network that can learn the cross-modal relationships found in the data with minimal human feature engineering. Unlike previous work, our proposed model is capable of directly generating complex low-level control policies from language and vision that allow the robot to reassemble motions shown during training.
Multimodal Policy Generation via Imitation
We motivate our approach with a simple example: consider a binning task in which a 6 DOF robot has to drop an object into one of several differently shaped and colored bowls on a table. To teach this task, the human demonstrator provides not only a kinesthetic demonstration of the desired trajectory, but also a verbal command, e.g., “Move towards the blue bowl”, to the robot. In this example, the trajectory generation would have to be conditioned on the blue bowl's position which, however, has to be extracted from visual sensing. Our approach automatically detects and extracts these relationships between the vision, language, and motion modalities in order to make the best use of contextual information for better generalization and disambiguation.
Figure FIGREF2 (left) provides an overview of our method. Our goal is to train a deep neural network that can take as input a task description $\mathbf {s}$ and an image $\mathbf {I}$ and consequently generate robot controls. In the remainder of this paper, we will refer to our network as the MPN. Rather than immediately producing control signals, the MPN generates the parameters for a lower-level controller. This distinction allows us to build upon well-established control schemes in robotics and optimal control. In our specific case, we use the widely used Dynamic Motor Primitives BIBREF1 as the lower-level controller for control signal generation.
In essence, our network can be divided into three parts. The first part, the semantic network, is used to create a task embedding $\mathbf {e}$ from the input sentence $\mathbf {s}$ and the environment image $\mathbf {I}$. In a first step, the sentence $\mathbf {s}$ is tokenized and converted into a sentence matrix $\mathbf {W} = f_W(\mathbf {s}) \in \mathbb {R}^{l_s \times l_w}$ by utilizing pre-trained GloVe word embeddings BIBREF21, where $l_s$ is the padded fixed-size length of the sentence and $l_w$ is the size of the GloVe word vectors. To extract the relationships between the words, we use multiple CNNs $\mathbf {e}_s = f_L(\mathbf {W})$ with filter size $n \times l_w$ for varying $n$, representing different $n$-gram sizes BIBREF22. The final representation is built by flattening the individual $n$-grams with max-pooling of size $(l_s - n_i + 1)\times l_w$ and concatenating the results before using a single perceptron to detect relationships between different $n$-grams. In order to combine the sentence embedding $\mathbf {e}_s$ with the image, it is concatenated as a fourth channel to the input image $\mathbf {I}$. The task embedding $\mathbf {e}$ is produced with three blocks of convolutional layers, each composed of two regular convolutions followed by a residual convolution BIBREF23.
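To make the sentence branch of the semantic network concrete, the following PyTorch sketch re-implements it as described above. It is a minimal, hypothetical sketch: the filter counts, n-gram sizes, output dimension, and all function names are assumptions and not taken from the paper; only the n-gram CNNs over GloVe embeddings with max-pooling, concatenation, and a single perceptron are shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceEncoder(nn.Module):
    """N-gram CNN over pre-trained GloVe embeddings (illustrative sketch)."""

    def __init__(self, glove_weights, ngram_sizes=(2, 3, 4), n_filters=64, out_dim=64):
        super().__init__()
        l_w = glove_weights.shape[1]  # size of the GloVe word vectors
        self.embed = nn.Embedding.from_pretrained(glove_weights, freeze=True)
        # one CNN per n-gram size, each with filter shape n x l_w
        self.convs = nn.ModuleList(
            nn.Conv2d(1, n_filters, kernel_size=(n, l_w)) for n in ngram_sizes
        )
        self.fc = nn.Linear(n_filters * len(ngram_sizes), out_dim)  # single perceptron

    def forward(self, token_ids):                       # token_ids: (batch, l_s)
        W = self.embed(token_ids).unsqueeze(1)          # (batch, 1, l_s, l_w)
        feats = []
        for conv in self.convs:
            c = torch.relu(conv(W)).squeeze(3)          # (batch, n_filters, l_s - n + 1)
            feats.append(F.max_pool1d(c, c.size(2)).squeeze(2))  # max over positions
        return self.fc(torch.cat(feats, dim=1))         # sentence embedding e_s

# e_s would then be tiled spatially and attached to the RGB image as a fourth
# channel before the three convolutional blocks that produce the task embedding e.
```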
In the second part, the policy translation network is used to generate the task parameters $\Theta \in \mathbb {R}^{o \times b}$ and $\mathbf {g} \in \mathbb {R}^{o}$ given a task embedding $\mathbf {e}$, where $o$ is the number of output dimensions and $b$ the number of basis functions in the DMP:
where $f_G()$ and $f_H()$ are multilayer perceptrons that use $\mathbf {e}$ after it has been processed by a single perceptron with weight $\mathbf {W}_G$ and bias $\mathbf {b}_G$. These parameters are then used in the third part of the network, which is a DMP BIBREF0, allowing us to leverage a large body of research regarding their behavior and stability, while also allowing other extensions of DMPs BIBREF5, BIBREF24, BIBREF25 to be incorporated into our framework.
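For readers unfamiliar with the lower-level controller, the sketch below rolls out one output dimension of a standard discrete DMP given predicted basis weights and a goal. The spring-damper gains, basis-function placement, and Euler integration scheme are generic assumptions from the DMP literature, not values reported in the paper.

```python
import numpy as np

def dmp_rollout(theta, goal, y0, tau=1.0, dt=0.01, alpha_y=25.0, beta_y=6.25, alpha_x=4.0):
    """Integrate a single DMP dimension from basis weights `theta` and a goal (sketch)."""
    theta = np.asarray(theta, dtype=float)
    b = len(theta)
    centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, b))   # basis centers along the phase
    widths = np.ones(b) * b ** 1.5 / centers / alpha_x       # heuristic basis widths
    y, dy, x, traj = float(y0), 0.0, 1.0, []
    for _ in range(int(tau / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)            # radial basis activations
        forcing = (psi @ theta) / (psi.sum() + 1e-10) * x * (goal - y0)
        ddy = alpha_y * (beta_y * (goal - y) - dy) + forcing  # transformation system
        dy += ddy * dt / tau
        y += dy * dt / tau
        x += -alpha_x * x * dt / tau                          # canonical system
        traj.append(y)
    return np.array(traj)

# The policy translation network predicts theta (o x b) and the goal (o values);
# each of the o output dimensions (6 joints + gripper) is rolled out this way.
```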
Results
We evaluate our model in a simulated binning task in which the robot is tasked to place a cube into a bowl as outlined by the verbal command. Each environment contains between three and five objects differentiated by their size (small, large), shape (round, square) and color (red, green, blue, yellow, pink), totalling 20 different objects. Depending on the generated scenario, combinations of these three features are necessary to distinguish the targets from each other, allowing for tasks of varying complexity.
To train our model, we generated a dataset of 20,000 demonstrated 7 DOF trajectories (6 robot joints and 1 gripper dimension) in our simulated environment, together with a sentence generator capable of creating natural task descriptions for each scenario. In order to create the language generator, we conducted a human-subject study to collect sentence templates of a placement task as well as common words and synonyms for each of the used features. By utilising these data, we are able to generate over 180,000 unique sentences, depending on the generated scenario.
The generated parameters of the low-level DMP controller – the weights and goal position – must be sufficiently accurate in order to successfully deliver the object to the specified bin. On the right side of Figure FIGREF4, the generated weights for the DMP are shown for two tasks in which the target is close to and far away from the robot, located at different sides of the table, indicating the robot's ability to generate differently shaped trajectories. The accuracy of the goal position can be seen in Figure FIGREF4 (left), which shows another aspect of our approach: by using stochastic forward passes BIBREF26, the model can return an estimate for the validity of a requested task in addition to the predicted goal configuration. The figure shows that the goal position of a red bowl has a relatively small distribution independent of the sentence used or the location on the table, whereas an invalid target (green) produces a significantly larger distribution, indicating that the requested task may be invalid.
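Such a validity estimate can be obtained with Monte-Carlo sampling of stochastic forward passes, i.e., keeping dropout active at inference time. The snippet below is purely illustrative; it assumes a PyTorch model `mpn` with dropout layers whose forward pass returns the predicted goal configuration.

```python
import torch

def goal_distribution(mpn, sentence, image, n_samples=50):
    """Monte-Carlo dropout: sample goal predictions under active dropout (sketch)."""
    mpn.train()                              # keep dropout layers stochastic
    with torch.no_grad():
        goals = torch.stack([mpn(sentence, image) for _ in range(n_samples)])
    mpn.eval()
    # A broad spread of the samples suggests the requested task may be invalid.
    return goals.mean(dim=0), goals.std(dim=0)
```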
To test our model, we generated 500 new scenarios, testing each of the three features to identify the correct target among other bowls. A task is considered to be successfully completed when the cube is within the boundaries of the targeted bowl. Bowls have a bounding box of 12.5 and 17.5cm edge length for the small and large variant, respectively. Our experiments showed that using the object's color or shape to uniquely identify it allows the robot to successfully complete the binning task in 97.6% and 96.0% of the cases. However, using the shape alone as a unique identifier, the task could only be completed in 79.0% of the cases. We suspect that the loss of accuracy is due to the low resolution of the input image, preventing the network from reliably distinguishing the object shapes. In general, our approach is able to actuate the robot with a target error well below 5cm, given that the target was correctly identified.
Conclusion and Future Work
In this work, we presented an imitation learning approach combining language, vision, and motion. A neural network architecture called the Multimodal Policy Network was introduced which is able to learn the cross-modal relationships in the training data and, as a result, achieve high generalization and disambiguation performance. Our experiments showed that the model is able to generalize towards different locations and sentences while maintaining a high success rate of delivering an object to a desired bowl. In addition, we discussed an extension of the method that allows us to obtain uncertainty information from the model by utilizing stochastic network outputs to get a distribution over the belief.
The modularity of our architecture allows us to easily exchange parts of the network. This can be utilized for transfer learning between different tasks in the semantic network, for transfer between different robots by moving the policy translation network to another robot in simulation, or to bridge the gap between simulation and reality. | 96-97.6% using the objects color or shape and 79% using shape alone
1e2ffa065b640e912d6ed299ff713a12195e12c4 | 1e2ffa065b640e912d6ed299ff713a12195e12c4_0 | Q: What simulations are performed by the authors to validate their approach?
Text: (same article as above) | a simulated binning task in which the robot is tasked to place a cube into a bowl as outlined by the verbal command
28b2a20779a78a34fb228333dc4b93fd572fda15 | 28b2a20779a78a34fb228333dc4b93fd572fda15_0 | Q: Does proposed end-to-end approach learn in reinforcement or supervised learning manner?
Text: (same article as above) | supervised learning
b367b823c5db4543ac421d0057b02f62ea16bf9f | b367b823c5db4543ac421d0057b02f62ea16bf9f_0 | Q: Are synonymous relation taken into account in the Japanese-Vietnamese task?
Text: Introduction
NMT systems have achieved better performance than statistical machine translation (SMT) systems in recent years, not only on language pairs with ample available data BIBREF1, BIBREF2, but also on low-resource language pairs BIBREF3, BIBREF4. Nevertheless, NMT still faces many challenges which have adverse effects on its effectiveness BIBREF0. One of these challenges is that NMT is biased towards high-frequency words, and thus words with lower frequencies are often translated incorrectly. This challenge has been confirmed again in BIBREF3, and the authors proposed two strategies to tackle the problem by modifying the model's output distribution: one normalizes some matrices by fixing them to constants after several training epochs, and the other adds a direct connection from the source embeddings through a simple feed-forward neural network (FFNN). These approaches increase the size and the training time of their NMT systems. In this work, we follow their second approach but simplify the computations by replacing the FFNN with two single operations.
Although the above approaches can improve the prediction of rare words, NMT systems often use vocabularies of limited size, from 30K to 80K most frequent words of the training data, in order to reduce computational complexity and the size of the models BIBREF5, BIBREF6, so rare-word translation remains problematic in NMT. Even when a larger vocabulary is used, this situation still exists BIBREF7. A word of the input text which is not in the vocabulary (called an unknown word) is represented by the $unk$ symbol in NMT systems. Inspired by alignments and phrase tables in phrase-based machine translation (SMT) as suggested by BIBREF8, BIBREF6 proposed to address OOV words using an annotated training corpus. They then used a dictionary generated from an alignment model, or maps between source and target words, to determine the translations of $unks$ if translations are not found. BIBREF9 proposed to reduce unknown words using Gage's Byte Pair Encoding (BPE) algorithm BIBREF10, but NMT systems are less effective for low-resource language pairs due to the lack of data, and also for languages for which sub-words are not the optimal translation unit. In this paper, we employ several techniques inspired by the works from NMT and traditional SMT mentioned above. Instead of a loosely unsupervised approach, we suggest a supervised approach to this problem using the synonymous relations of word pairs from WordNet on Japanese$\rightarrow $Vietnamese and English$\rightarrow $Vietnamese systems. To leverage the effectiveness of this relation in English, we transform variants of words in the source texts into their original forms by separating their affixes, which were collected by hand.
Our contributions in this work are:
We establish the state of the art for Japanese-Vietnamese NMT systems.
We propose an approach to deal with rare-word translation by integrating source embeddings into the attention component of NMT.
We present a supervised algorithm to reduce the number of unknown words for the English$\rightarrow $Vietnamese translation system.
We demonstrate the effectiveness of leveraging linguistic information from WordNet to alleviate the rare-word problem in NMT.
Neural Machine Translation
Our NMT system uses a bidirectional recurrent neural network (biRNN) as the encoder and a single-directional RNN as the decoder, with the input feeding of BIBREF11 and the attention mechanism of BIBREF5. The encoder's biRNN is constructed from two RNNs with LSTM hidden units, one for the forward and the other for the backward direction of the source sentence $\mathbf {x}=(x_1, ...,x_n)$. Every word $x_i$ in the sentence is first encoded into a continuous representation $E_s(x_i)$, called the source embedding. Then $\mathbf {x}$ is transformed into a fixed-length hidden vector $\mathbf {h}_i$ representing the sentence at time step $i$, called the annotation vector, which combines the states of the forward $\overrightarrow{\mathbf {h}}_i$ and backward $\overleftarrow{\mathbf {h}}_i$ RNNs:
$\overrightarrow{\mathbf {h}}_i=f(E_s(x_i),\overrightarrow{\mathbf {h}}_{i-1})$
$\overleftarrow{\mathbf {h}}_i=f(E_s(x_i),\overleftarrow{\mathbf {h}}_{i+1})$
The decoder generates the target sentence $\mathbf {y}={(y_1, ..., y_m)}$, and at the time step $j$, the predicted probability of the target word $y_j$ is estimated as follows:
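One standard parameterization of this distribution, assuming a single output projection matrix $\mathbf {W}_o$ followed by a softmax (our notation, not necessarily the paper's), is:
$p(y_j \mid y_{<j}, \mathbf {x}) = \operatorname{softmax}(\mathbf {W}_o \mathbf {z}_j)$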
where $\mathbf {z}_j$ is the output hidden state of the attention mechanism, computed from the previous output hidden state $\mathbf {z}_{j-1}$, the embedding of the previous target word $E_t(y_{j-1})$ and the context $\mathbf {c}_j$:
$\mathbf {z}_j=g(E_t(y_{j-1}), \mathbf {z}_{j-1}, \mathbf {c}_j)$
The source context $\mathbf {c}_j$ is the weighted sum of the encoder's annotation vectors $\mathbf {h}_i$:
$\mathbf {c}_j=\sum ^n_{i=1}\alpha _{ij}\mathbf {h}_i$
where $\alpha _{ij}$ are the alignment weights, denoting the relevance between the current target word $y_j$ and all source annotation vectors $\mathbf {h}_i$.
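A common choice for computing these weights in this family of models, assuming a learned scoring function between the decoder state and each annotation vector (the exact score and conditioning vary between attention variants), is a softmax over the scores:
$\alpha _{ij} = \frac{\exp (\operatorname{score}(\mathbf {z}_{j-1}, \mathbf {h}_i))}{\sum _{i^{\prime }=1}^{n} \exp (\operatorname{score}(\mathbf {z}_{j-1}, \mathbf {h}_{i^{\prime }}))}$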
Rare Word translation
In this section, we present the details of our approaches to overcome the rare-word situation. While the first strategy augments the source context to translate low-frequency words, the remaining strategies reduce the number of out-of-vocabulary (OOV) words in the input text.
Rare Word translation ::: Low-frequency Word Translation
The attention mechanism in RNN-based NMT maps the target word into the corresponding source context through the annotation vectors $\mathbf {h}_i$. In the recurrent hidden unit, $\mathbf {h}_i$ is computed from the previous state $\mathbf {h}_{i-1}$. Therefore, the information flow of the words in the source sentence may be diminished over time. This leads to reduced accuracy when translating low-frequency words, since there is no direct connection between the target word and the source word. To alleviate the adverse impact of this problem, BIBREF3 combined the source embeddings with the predictive distribution over the output target word in the following steps:
Firstly, the weighted average vector of the source embeddings is computed as follows:
where $\alpha _j(e)$ are the alignment weights in the attention component and $f_e = E_s(x)$ are the embeddings of the source words.
Then $l_j$ is transformed through a one-hidden-layer FFNN with a residual connection as proposed by BIBREF12:
Finally, the output distribution over the target word is calculated by:
The matrices $\mathbf {W}_l$, $\mathbf {W}_t$ and $\mathbf {b}_t$ are trained together with other parameters of the NMT model.
This approach improves the performance of the NMT systems but introduces more computations as the model size increase due to the additional parameters $\mathbf {W}_l$, $\mathbf {W}_t$ and $\mathbf {b}_t$. We simplify this method by using the weighted average of source embeddings directly in the softmax output layer:
Our method does not learn any additional parameters. Instead, it requires the source embedding size to be compatible with the decoder's hidden states. With the additional information provided by the source embeddings, we achieve similar improvements compared to the more expensive method described in BIBREF3.
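A minimal sketch of this simplification is given below. It assumes the attention-weighted average of the source embeddings is added element-wise to the attentional decoder state before the existing output projection, which is our reading of "compatible with the decoder's hidden states"; the tensor and function names are ours.

```python
import torch

def output_logits(output_proj, z_j, attn_weights, src_embeds):
    """Reuse the existing output projection; only add the weighted source
    embeddings to the decoder state, so no new parameters are learned (sketch)."""
    # attn_weights: (batch, src_len), src_embeds: (batch, src_len, emb_dim)
    l_j = torch.bmm(attn_weights.unsqueeze(1), src_embeds).squeeze(1)  # (batch, emb_dim)
    return output_proj(z_j + l_j)   # emb_dim must match the decoder hidden size
```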
Rare Word translation ::: Reducing Unknown Words
In our previous experiments for English$\rightarrow $Vietnamese, the BPE algorithm BIBREF9 applied to the source side does not significantly improve the systems even though it is able to reduce the number of unknown English words. We speculate that this might be due to the morphological differences between the source and the target languages (English and Vietnamese in this case). The unsupervised way in which BPE learns sub-words in English might thus not be explicit enough to provide morphological information to the Vietnamese side. In this work, we would like to attempt a more explicit, supervised way. We collect 52 popular affixes (prefixes and suffixes) in English and then apply the separating affixes algorithm (called SAA) to reduce the number of unknown words as well as to force our NMT systems to learn better morphological mappings between the two languages.
The main idea of our SAA is to separate the affixes of unknown words while ensuring that the rest of the word still exists in the vocabulary. Let the vocabulary $V$ contain the $K$ most frequent words from the training set $T1$, and let $P$ be a set of prefixes and $S$ a set of suffixes; we call $w^{\prime }$ the rest of an unknown or rare word $w$ after delimiting its affixes. We iteratively pick a $w$ from the $N$ words (including unknown words and rare words) of the source text $T2$ and check whether $w$ starts with a prefix $p$ in $P$ or ends with a suffix $s$ in $S$; we then split off its affixes if $w^{\prime }$ is in $V$. A rare word in $V$ can also have its affixes separated if its frequency is less than a given threshold. We set this threshold to 2 in our experiments. Similarly to the BPE approach, we also employ a pair of the special symbol $@$ to separate affixes from the word. Listing SECREF6 shows our SAA algorithm.
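As a rough reconstruction of that listing from the description above, the following Python sketch shows SAA; the function and variable names are ours, and the BPE-style "@@" marker is an assumption about how the pair of "@" symbols is attached.

```python
def saa(tokens, vocab, prefixes, suffixes, freq, rare_threshold=2):
    """Split affixes of unknown/rare words when the remainder is in-vocabulary (sketch)."""
    out = []
    for w in tokens:
        if w in vocab and freq.get(w, 0) >= rare_threshold:
            out.append(w)
            continue
        split = None
        for p in prefixes:                                # try prefixes first
            if w.startswith(p) and w[len(p):] in vocab:
                split = [p + "@@", w[len(p):]]
                break
        if split is None:
            for s in suffixes:                            # then suffixes
                if w.endswith(s) and w[:-len(s)] in vocab:
                    split = [w[:-len(s)], "@@" + s]
                    break
        out.extend(split if split else [w])
    return out
```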
Rare Word translation ::: Dealing with OOV using WordNet
WordNet is a lexical database grouping words into sets which share some semantic relation. Its English version was first proposed by BIBREF13. It has become a useful resource for many natural language processing tasks BIBREF14, BIBREF15, BIBREF16. WordNet is available mainly for English and German; versions for other languages are being developed, including Asian languages such as Japanese, Chinese, Indonesian and Vietnamese. Several works have employed WordNet in SMT systems BIBREF17, BIBREF18, but to our knowledge, none of them exploits the benefits of WordNet in order to ease the rare-word problem in NMT. In this work, we propose the learning synonymous algorithm (called LSW), which uses the English and Japanese WordNets to handle unknown words in our NMT systems.
In WordNet, synonymous words are organized in groups called synsets. Our aim is to replace an OOV word by one of its synonyms which appears in the vocabulary of the translation system. From the training set of the source language $T1$, we extract the vocabulary $V$ consisting of the $K$ most frequent words. For each OOV word from $T1$, we learn its synonyms which exist in $V$ from the WordNet $W$. The synonyms are then arranged in descending order of their frequencies to facilitate selection of the $n$ best words which have the highest frequencies. The output file $C$ of the algorithm contains the OOV words and their corresponding synonyms and is then applied to the input text $T2$. We also utilize a frequency threshold for rare words in the same way as in the SAA algorithm. In practice, we set this threshold to 0, meaning that no word in $V$ is replaced by its synonym. If a source sentence has $m$ unknown words and each of them has $n$ best synonyms, it would generate $m^n$ sentences. The translation process then allows us to select the best hypothesis based on their scores. Because each word in WordNet can belong to many synsets with different meanings, an inappropriate word can be placed in the current source context. We will address this situation in future work. Our systems only use the 1-best synonym for each OOV word. Listing SECREF7 presents the LSW algorithm.
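Analogously, a minimal Python sketch of LSW reconstructed from this description is given below (1-best variant). The names are ours, and it assumes synonym candidates are ranked by their frequency in the training corpus, as described above.

```python
from collections import Counter

def build_synonym_map(train_tokens, wordnet_synonyms, vocab_size=50000, rare_threshold=0):
    """Map each OOV/rare training word to its most frequent in-vocabulary synonym (sketch)."""
    freq = Counter(train_tokens)
    vocab = {w for w, _ in freq.most_common(vocab_size)}
    syn_map = {}
    for w in set(train_tokens):
        if w in vocab and freq[w] > rare_threshold:
            continue
        candidates = [s for s in wordnet_synonyms.get(w, []) if s in vocab]
        if candidates:
            syn_map[w] = max(candidates, key=lambda s: freq[s])  # 1-best synonym
    return syn_map

def apply_lsw(tokens, syn_map):
    """Replace OOV words of an input sentence by their learned synonyms."""
    return [syn_map.get(w, w) for w in tokens]
```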
Experiments
We evaluate our approaches on the English-Vietnamese and Japanese-Vietnamese translation systems. Translation performance is measured in BLEU BIBREF19 using the multi-BLEU script from Moses.
Experiments ::: Datasets
We consider two low-resource language pairs: Japanese-Vietnamese and English-Vietnamese. For Japanese-Vietnamese, we use the TED data provided by WIT3 BIBREF20 and compiled by BIBREF21. The training set includes 106758 sentence pairs, the validation and test sets are dev2010 (568 pairs) and tst2010 (1220 pairs). For English$\rightarrow $Vietnamese, we use the dataset from IWSLT 2015 BIBREF22 with around 133K sentence pairs for the training set, 1553 pairs in tst2012 as the validation and 1268 pairs in tst2013 as the test sets.
For the LSW algorithm, we crawled pairs of synonymous words from the Japanese-English WordNet and obtained 315,850 pairs for English and 1,419,948 pairs for Japanese.
Experiments ::: Preprocessing
For English and Vietnamese, we tokenized the texts and then true-cased them using the Moses scripts. We do not use any word segmentation tool for Vietnamese. For comparison purposes, Sennrich's BPE algorithm is applied to the English texts. Following the same preprocessing steps for Japanese (JPBPE) as in BIBREF21, we use KyTea BIBREF23 to tokenize the texts and then apply BPE on them. The number of BPE merge operations is 50K for both Japanese and English.
Experiments ::: Systems and Training
We implement our NMT systems using the OpenNMT-py framework BIBREF24 with the same settings as in BIBREF21 for our baseline systems. Our systems are built with two hidden layers in both the encoder and the decoder, each layer with 512 hidden units. In the encoder, a BiLSTM architecture is used for each layer, and in the decoder, each layer is an LSTM layer. The size of the embedding layers on both the source and target sides is also 512. The Adam optimizer is used with an initial learning rate of $0.001$, and we then apply learning rate annealing. We train our systems for 16 epochs with a batch size of 32. Other parameters are the same as the default settings of OpenNMT-py.
We then modify the baseline architecture with the alternative proposed in Section SECREF5 and compare it against our baseline systems. All settings are the same as in the baseline systems.
Experiments ::: Results
In this section, we show the effectiveness of our methods on the two low-resource language pairs and compare them to other works. The empirical results are shown in Table TABREF15 for Japanese-Vietnamese and in Table TABREF20 for English-Vietnamese. Note that Multi-BLEU is only measured in the Japanese$\rightarrow $Vietnamese direction and the standard BLEU points are given in brackets.
Experiments ::: Results ::: Japanese-Vietnamese Translation
We conduct two out of the three proposed approaches for the Japanese-Vietnamese translation systems, and the results are given in Table TABREF15.
Baseline Systems. We find that our translation systems, which use Sennrich's BPE method for Japanese texts and do not use word segmentation for Vietnamese texts, are either better than or not significantly different from the systems that used word segmentation in BIBREF21. In particular, we obtained +0.38 BLEU points between (1) and (4) in the Japanese$\rightarrow $Vietnamese direction and -0.18 BLEU points between (1) and (3) in the Vietnamese$\rightarrow $Japanese direction.
Our Approaches. With the systems trained on the modified architecture described in Section SECREF5, we obtained improvements of +0.54 BLEU points in the Japanese$\rightarrow $Vietnamese direction and +0.42 BLEU points in the Vietnamese$\rightarrow $Japanese direction compared to the baseline systems.
Because a Vietnamese WordNet is not available, we only exploit WordNet to tackle unknown words in the Japanese texts of our Japanese$\rightarrow $Vietnamese translation system. After tokenization with KyTea, the LSW algorithm is applied to the Japanese texts to replace OOV words with their synonyms. We choose the 1-best synonym for each OOV word. Table TABREF18 shows the number of OOV words replaced by their synonyms. The replaced texts are then BPEd and trained on the proposed architecture. The largest improvement is +0.92 BLEU points, between (1) and (3). We also observe an improvement of +0.7 BLEU points between (3) and (5) without using the data augmentation described in BIBREF21.
Experiments ::: Results ::: English-Vietnamese Translation
We examine the effect of all approaches presented in Section SECREF3 for our English-Vietnamese translation systems. Table TABREF20 summarizes those results and the scores from other systems BIBREF3, BIBREF25.
Baseline systems. After preprocessing data using Moses scripts, we train the systems of English$\leftrightarrow $Vietnamese on our baseline architecture. Our translation system obtained +0.82 BLEU points compared to BIBREF3 in the English$\rightarrow $Vietnamese and this is lower than the system of BIBREF25 with neural phrase-based translation architecture.
Our approaches. The datasets from the baseline systems are trained on our modified NMT architecture. The improvements can be found as +0.55 BLEU points between (1) and (2) in the English$\rightarrow $Vietnamese and +0.45 BLEU points (in tst2012) between (1) and (2) in the Vietnamese$\rightarrow $English.
For comparison purposes, the English texts are split into sub-words using Sennrich's BPE method. We observe that the achieved BLEU scores are lower. Therefore, we then apply the SAA algorithm to the English texts from (2) in the English$\rightarrow $Vietnamese direction. The number of words the algorithm is applied to is listed in Table TABREF21. The improvement in BLEU is +0.74 between (4) and (1).
Similarly to the Japanese$\rightarrow $Vietnamese system, we apply LSW algorithm on the English texts from (4) while selecting 1-best synonym for each OOV word. The number of replaced words on English texts are indicated in the Table TABREF22. Again, we obtained a bigger gain of +0.99 (+1.02) BLEU points in English$\rightarrow $Vietnamese direction. Compared to the most recent work BIBREF25, our system reports an improvement of +0.47 standard BLEU points on the same dataset.
We investigate some examples of translations generated by the English$\rightarrow $Vietnamese systems with our proposed methods in Table TABREF23. The bold texts in red present correct or approximate translations, while the italic texts in gray denote incorrect translations. In the first example, we consider two words: presentation and the unknown word applauded. The word presentation is predicted correctly as the Vietnamese "bài thuyết trình" in most cases when we combine the source context through embeddings. The unknown word applauded, which has not been seen in the vocabulary, is ignored in the first two cases (baseline and source embedding) but is roughly translated as the Vietnamese "hoan nghênh" in the SAA system because it is separated into applaud and ed. In the second example, we observe the translations of the unknown word tryout; they are wrong in the first three cases, but in the LSW system it is predicted with a closer meaning as the Vietnamese "bài kiểm tra" because it is replaced by its synonym test.
Related Works
Addressing unknown words was considered early on in Statistical Machine Translation (SMT) systems. Typical studies include the following: BIBREF26 proposed four techniques to overcome this situation by extending the morphology and spelling of words, using a bilingual dictionary, or transliterating names; these approaches are difficult to adapt to different domains. BIBREF27 trained word embedding models to learn word similarity from monolingual data, and an unknown word is then replaced by its most similar word. BIBREF28 used a linear model to learn mappings between the source and target spaces, based on a small initial bilingual dictionary, to find the translations of source words. In NMT, however, there are not many works tackling this problem. BIBREF7 use a very large vocabulary to handle unknown words. BIBREF6 generate a dictionary from alignment data based on an annotated corpus to decide the hypotheses for unknown words. BIBREF3 introduced solutions for dealing with the rare-word problem; however, their models require more parameters, thus decreasing the overall efficiency.
In another direction, BIBREF9 exploited the BPE algorithm to reduce the number of unknown words in NMT and achieved significant gains on many language pairs. The second approach presented in this work follows this direction: instead of using an unsupervised method to split rare and unknown words into translatable sub-words, we use a supervised method. Our third approach, using WordNet, can be seen as a form of smoothing, in which we use the translations of synonymous words to approximate the translation of an OOV word. Another work worth mentioning in this direction is BIBREF29, which uses morphological and semantic information as word factors to help translate rare words.
Conclusion
In this study, we have proposed three different strategies to handle rare words in NMT, whose combination brings significant improvements to the NMT systems on two low-resource language pairs. In future work, we will consider selecting appropriate synonyms for the source sentence from the n-best synonymous words to further improve the performance of the NMT systems, and we will leverage more unsupervised methods based on monolingual data to address the rare-word problem.
Acknowledgments
This work is supported by the project "Building a machine translation system to support translation of documents between Vietnamese and Japanese to help managers and businesses in Hanoi approach to Japanese market", No. TC.02-2016-03. | Yes |
84737d871bde8058d8033e496179f7daec31c2d3
84737d871bde8058d8033e496179f7daec31c2d3_0
Q: Is the supervised morphological learner tested on Japanese?
Text: Introduction
NMT systems have achieved better performance than statistical machine translation (SMT) systems in recent years, not only on language pairs with abundant data BIBREF1, BIBREF2, but also on low-resource language pairs BIBREF3, BIBREF4. Nevertheless, NMT still faces many challenges which have adverse effects on its effectiveness BIBREF0. One of these challenges is that NMT is biased towards translating high-frequency words, so words with lower frequencies are often translated incorrectly. This challenge was also confirmed in BIBREF3, where two strategies were proposed to tackle the problem through modifications of the model's output distribution: one normalizes some matrices by fixing them to constants after several training epochs, and the other adds a direct connection from the source embeddings through a simple feed-forward neural network (FFNN). These approaches increase the size and the training time of the NMT systems. In this work, we follow the second approach but simplify the computations by replacing the FFNN with two simple operations.
Although the above approaches can improve the prediction of rare words, NMT systems often use vocabularies of limited size, from 30K to 80K most frequent words of the training data, in order to reduce computational complexity and model size BIBREF5, BIBREF6, so rare-word translation remains problematic in NMT. Even when a larger vocabulary is used, this situation persists BIBREF7. A word of the input text that is not in the vocabulary (called an unknown word) is represented by the $unk$ symbol in NMT systems. Inspired by alignments and phrase tables in phrase-based statistical machine translation (SMT), as suggested by BIBREF8, BIBREF6 proposed to address OOV words using an annotated training corpus. They then used a dictionary generated from an alignment model, or mappings between source and target words, to determine the translations of the $unk$s when translations are not found. BIBREF9 proposed to reduce the number of unknown words using Gage's Byte Pair Encoding (BPE) algorithm BIBREF10, but NMT systems are less effective for low-resource language pairs due to the lack of data, and for languages in which sub-words are not the optimal translation unit. In this paper, we employ several techniques inspired by the works from NMT and traditional SMT mentioned above. Instead of a loosely unsupervised approach, we suggest a supervised approach to this problem that uses the synonym relations of word pairs from WordNet in Japanese$\rightarrow $Vietnamese and English$\rightarrow $Vietnamese systems. To leverage the effectiveness of these relations in English, we transform variants of words in the source texts to their original forms by separating their affixes, which were collected by hand.
Our contributions in this work are:
We release the state-of-the-art for Japanese-Vietnamese NMT systems.
We propose an approach to deal with rare-word translation by integrating source embeddings into the attention component of NMT.
We present a supervised algorithm to reduce the number of unknown words for the English$\rightarrow $Vietnamese translation system.
We demonstrate the effectiveness of leveraging linguistic information from WordNet to alleviate the rare-word problem in NMT.
Neural Machine Translation
Our NMT system uses a bidirectional recurrent neural network (biRNN) as the encoder and a unidirectional RNN as the decoder, with the input feeding of BIBREF11 and the attention mechanism of BIBREF5. The encoder's biRNN is constructed from two RNNs with LSTM hidden units, one running forward and the other backward over the source sentence $\mathbf {x}=(x_1, ...,x_n)$. Every word $x_i$ in the sentence is first encoded into a continuous representation $E_s(x_i)$, called the source embedding. Then $\mathbf {x}$ is transformed into a fixed-length hidden vector $\mathbf {h}_i$ representing the sentence at time step $i$, called the annotation vector, which combines the forward state $\overrightarrow{\mathbf {h}}_i$ and the backward state $\overleftarrow{\mathbf {h}}_i$:
$\overrightarrow{\mathbf {h}}_i=f(E_s(x_i),\overrightarrow{\mathbf {h}}_{i-1})$
$\overleftarrow{\mathbf {h}}_i=f(E_s(x_i),\overleftarrow{\mathbf {h}}_{i+1})$
The decoder generates the target sentence $\mathbf {y}={(y_1, ..., y_m)}$, and at the time step $j$, the predicted probability of the target word $y_j$ is estimated as follows:
where $\mathbf {z}_j$ is the output hidden states of the attention mechanism and computed by the previous output hidden states $\mathbf {z}_{j-1}$, the embedding of previous target word $E_t(y_{j-1})$ and the context $\mathbf {c}_j$:
$\mathbf {z}_j=g(E_t(y_{j-1}), \mathbf {z}_{j-1}, \mathbf {c}_j)$
The source context $\mathbf {c}_j$ is the weighted sum of the encoder's annotation vectors $\mathbf {h}_i$:
$\mathbf {c}_j=\sum ^n_{i=1}\alpha _{ij}\mathbf {h}_i$
where $\alpha _{ij}$ are the alignment weights, denoting the relevance between the current target word $y_j$ and all source annotation vectors $\mathbf {h}_i$.
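For concreteness, the context computation described above can be sketched as follows; this is a simplified NumPy illustration of standard attention, not the code of our systems, and the variable names are assumptions.

    import numpy as np

    def attention_context(H, scores):
        """H: (n, d) annotation vectors h_i; scores: (n,) relevance of each h_i to the target word y_j."""
        alpha = np.exp(scores - scores.max())
        alpha = alpha / alpha.sum()        # alignment weights alpha_ij (softmax over source positions)
        c_j = alpha @ H                    # context c_j = sum_i alpha_ij * h_i
        return c_j, alpha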
Rare Word translation
In this section, we present the details about our approaches to overcome the rare word situation. While the first strategy augments the source context to translate low-frequency words, the remaining strategies reduce the number of OOV words in the vocabulary.
Rare Word translation ::: Low-frequency Word Translation
The attention mechanism in RNN-based NMT maps the target word to the corresponding source context through the annotation vectors $\mathbf {h}_i$. In the recurrent hidden unit, $\mathbf {h}_i$ is computed from the previous state $\mathbf {h}_{i-1}$. Therefore, the information flow from the words in the source sentence may be diminished over time. This leads to reduced accuracy when translating low-frequency words, since there is no direct connection between the target word and the source word. To alleviate the adverse impact of this problem, BIBREF3 combined the source embeddings with the predictive distribution over the output target word in the following steps:
Firstly, the weighted average vector of the source embeddings is computed as follows:
$\mathbf {l}_j=\sum _{e}\alpha _j(e)f_e$
where $\alpha _j(e)$ are the alignment weights in the attention component and $f_e = E_s(x)$ are the embeddings of the source words.
Then $l_j$ is transformed through one-hidden-layer FFNN with residual connection proposed by BIBREF12:
Finally, the output distribution over the target word is calculated by:
The matrices $\mathbf {W}_l$, $\mathbf {W}_t$ and $\mathbf {b}_t$ are trained together with other parameters of the NMT model.
This approach improves the performance of the NMT systems but introduces more computations as the model size increase due to the additional parameters $\mathbf {W}_l$, $\mathbf {W}_t$ and $\mathbf {b}_t$. We simplify this method by using the weighted average of source embeddings directly in the softmax output layer:
Our method does not learn any additional parameters. Instead, it requires the source embedding size to be compatible with the decoder's hidden states. With the additional information provided from the source embeddings, we achieve similar improvements compared to the more expensive method described in BIBREF3.
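Since the exact form of the simplified output layer is not reproduced here, the following PyTorch-style sketch shows one plausible reading of it: the weighted average of the source embeddings is added to the attentional hidden state before the shared output projection, so no new parameters are introduced. Shapes and names are illustrative assumptions, not the authors' code.

    import torch

    def output_distribution(W_t, b_t, z_j, alpha_j, src_emb):
        """W_t: (V_t, d) output projection, b_t: (V_t,); z_j: (d,) attentional hidden state;
        alpha_j: (n,) attention weights; src_emb: (n, d) source embeddings E_s(x_i).
        Requires the source embedding size to match the decoder hidden size d."""
        l_j = alpha_j @ src_emb                 # weighted average of source embeddings
        logits = W_t @ (z_j + l_j) + b_t        # reuse the existing projection, no extra weights
        return torch.softmax(logits, dim=-1)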
Rare Word translation ::: Reducing Unknown Words
In our previous experiments on English$\rightarrow $Vietnamese, the BPE algorithm BIBREF9 applied to the source side did not significantly improve the systems even though it reduced the number of unknown English words. We speculate that this might be due to the morphological differences between the source and target languages (English and Vietnamese in this case). The unsupervised way in which BPE learns sub-words in English thus might not be explicit enough to provide morphological information to the Vietnamese side. In this work, we attempt a more explicit, supervised approach. We collect 52 common affixes (prefixes and suffixes) in English and then apply a separating-affixes algorithm (called SAA) to reduce the number of unknown words as well as to force our NMT systems to learn better morphological mappings between the two languages.
The main idea of our SAA is to separate the affixes of unknown words while ensuring that the remainder still exists in the vocabulary. Let the vocabulary $V$ contain the $K$ most frequent words from the training set $T1$, and let $P$ be a set of prefixes and $S$ a set of suffixes; we call $w^{\prime }$ the remainder of an unknown or rare word $w$ after removing its affixes. We iteratively pick a word $w$ from the $N$ words (including unknown and rare words) of the source text $T2$ and check whether $w$ starts with a prefix $p$ in $P$ or ends with a suffix $s$ in $S$; we then split off its affixes if $w^{\prime }$ is in $V$. A rare word in $V$ can also have its affixes separated if its frequency is less than a given threshold. We set this threshold to 2 in our experiments. Similarly to the BPE approach, we employ a pair of the special symbol $@$ to separate affixes from the word. Listing SECREF6 shows our SAA algorithm.
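A simplified sketch of this procedure is shown below; it only illustrates the description above, with the doubled $@$ marker written as "@@" in the BPE style, and the helper names are assumptions rather than the actual implementation.

    def saa_split(word, vocab, freq, prefixes, suffixes, rare_threshold=2):
        """Split an unknown or rare word into affix + remainder when the remainder is in the vocabulary."""
        if word in vocab and freq.get(word, 0) >= rare_threshold:
            return [word]                                   # frequent in-vocabulary word: keep as is
        for p in prefixes:
            if word.startswith(p) and word[len(p):] in vocab:
                return [p + "@@", word[len(p):]]            # e.g. "un@@ known"
        for s in suffixes:
            if word.endswith(s) and word[:-len(s)] in vocab:
                return [word[:-len(s)] + "@@", s]           # e.g. "applaud@@ ed"
        return [word]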
Rare Word translation ::: Dealing with OOV using WordNet
WordNet is a lexical database that groups words into sets sharing some semantic relations. Its English version was first proposed by BIBREF13. It has become a useful resource for many natural language processing tasks BIBREF14, BIBREF15, BIBREF16. WordNet is available mainly for English and German; versions for other languages are being developed, including some Asian languages such as Japanese, Chinese, Indonesian and Vietnamese. Several works have employed WordNet in SMT systems BIBREF17, BIBREF18, but to our knowledge, none of them exploits the benefits of WordNet to ease the rare-word problem in NMT. In this work, we propose a synonym-learning algorithm (called LSW) that uses the English and Japanese WordNets to handle unknown words in our NMT systems.
In WordNet, synonymous words are organized into groups called synsets. Our aim is to replace an OOV word with a synonym that appears in the vocabulary of the translation system. From the training set of the source language $T1$, we extract the vocabulary $V$ consisting of the $K$ most frequent words. For each OOV word in $T1$, we collect from the WordNet $W$ its synonyms that exist in $V$. The synonyms are then arranged in descending order of their frequencies to facilitate selection of the $n$ best words with the highest frequencies. The output file $C$ of the algorithm contains the OOV words and their corresponding synonyms, and it is then applied to the input text $T2$. We also utilize a frequency threshold for rare words in the same way as in the SAA algorithm. In practice, we set this threshold to 0, meaning that no word in $V$ is replaced by its synonym. If a source sentence has $m$ unknown words and each of them has $n$ best synonyms, it would generate $n^m$ sentences, and the translation process allows us to select the best hypothesis based on its score. Because each word in WordNet can belong to many synsets with different meanings, an inappropriate word can be placed in the current source context; we will address this issue in future work. Our systems only use the 1-best synonym for each OOV word. Listing SECREF7 presents the LSW algorithm.
Experiments
We evaluate our approaches on the English-Vietnamese and the Japanese-Vietnamese translation systems. Translation performance is measured in BLEU BIBREF19 by the multi-BLEU scripts from Moses.
Experiments ::: Datasets
We consider two low-resource language pairs: Japanese-Vietnamese and English-Vietnamese. For Japanese-Vietnamese, we use the TED data provided by WIT3 BIBREF20 and compiled by BIBREF21. The training set includes 106758 sentence pairs, the validation and test sets are dev2010 (568 pairs) and tst2010 (1220 pairs). For English$\rightarrow $Vietnamese, we use the dataset from IWSLT 2015 BIBREF22 with around 133K sentence pairs for the training set, 1553 pairs in tst2012 as the validation and 1268 pairs in tst2013 as the test sets.
For LSW algorithm, we crawled pairs of synonymous words from Japanese-English WordNet and achieved 315850 pairs for English and 1419948 pairs for Japanese.
Experiments ::: Preprocessing
For English and Vietnamese, we tokenized the texts and then true-cased them using the Moses scripts. We do not use any word segmentation tool for Vietnamese. For comparison purposes, Sennrich's BPE algorithm is applied to the English texts. Following the same preprocessing steps for Japanese (JPBPE) as in BIBREF21, we use KyTea BIBREF23 to tokenize the texts and then apply BPE to them. The number of BPE merge operations is 50k for both Japanese and English.
Experiments ::: Systems and Training
We implement our NMT systems using the OpenNMT-py framework BIBREF24 with the same settings as in BIBREF21 for our baseline systems. Our systems are built with two hidden layers in both the encoder and the decoder, and each layer has 512 hidden units. In the encoder, a BiLSTM architecture is used for each layer, and in the decoder, each layer is an LSTM layer. The size of the embedding layers on both the source and target sides is also 512. The Adam optimizer is used with an initial learning rate of $0.001$, and we then apply learning rate annealing. We train our systems for 16 epochs with a batch size of 32. Other parameters are kept at the default settings of OpenNMT-py.
We then modify the baseline architecture with the alternative proposed in Section SECREF5 in comparison to our baseline systems. All settings are the same as the baseline systems.
Experiments ::: Results
In this section, we show the effectiveness of our methods on two low-resource language pairs and compare them to the other works. The empirical results are shown in Table TABREF15 for Japanese-Vietnamese and in Table TABREF20 for English-Vietnamese. Note that, the Multi-BLEU is only measured in the Japanese$\rightarrow $Vietnamese direction and the standard BLEU points are written in brackets.
Experiments ::: Results ::: Japanese-Vietnamese Translation
We conduct two out of the three proposed approaches for Japanese-Vietnamese translation systems and the results are given in the Table TABREF15.
Baseline Systems. We find that our translation systems, which use Sennrich's BPE method for the Japanese texts and no word segmentation for the Vietnamese texts, show no significant differences compared to the systems that used word segmentation in BIBREF21. In particular, we obtained +0.38 BLEU points between (1) and (4) in the Japanese$\rightarrow $Vietnamese direction and -0.18 BLEU points between (1) and (3) in the Vietnamese$\rightarrow $Japanese direction.
Our Approaches. With the systems trained on the modified architecture described in Section SECREF5, we obtained improvements of +0.54 BLEU points in the Japanese$\rightarrow $Vietnamese direction and +0.42 BLEU points in the Vietnamese$\rightarrow $Japanese direction compared to the baseline systems.
Because a Vietnamese WordNet is not available, we only exploit WordNet to tackle unknown words in the Japanese texts of our Japanese$\rightarrow $Vietnamese translation system. After tokenization with KyTea, the LSW algorithm is applied to the Japanese texts to replace OOV words with their synonyms. We choose the 1-best synonym for each OOV word. Table TABREF18 shows the number of OOV words replaced by their synonyms. The replaced texts are then BPEd and trained on the proposed architecture. The largest improvement is +0.92 BLEU points, between (1) and (3). We also observe an improvement of +0.7 BLEU points between (3) and (5) without using the data augmentation described in BIBREF21.
Experiments ::: Results ::: English-Vietnamese Translation
We examine the effect of all approaches presented in Section SECREF3 for our English-Vietnamese translation systems. Table TABREF20 summarizes those results and the scores from other systems BIBREF3, BIBREF25.
Baseline systems. After preprocessing data using Moses scripts, we train the systems of English$\leftrightarrow $Vietnamese on our baseline architecture. Our translation system obtained +0.82 BLEU points compared to BIBREF3 in the English$\rightarrow $Vietnamese and this is lower than the system of BIBREF25 with neural phrase-based translation architecture.
Our approaches. The datasets from the baseline systems are trained on our modified NMT architecture. The improvements can be found as +0.55 BLEU points between (1) and (2) in the English$\rightarrow $Vietnamese and +0.45 BLEU points (in tst2012) between (1) and (2) in the Vietnamese$\rightarrow $English.
For comparison purposes, the English texts are split into sub-words using Sennrich's BPE method. We observe that the achieved BLEU scores are lower. Therefore, we then apply the SAA algorithm to the English texts from (2) in the English$\rightarrow $Vietnamese direction. The number of words the algorithm is applied to is listed in Table TABREF21. The improvement in BLEU is +0.74 between (4) and (1).
Similarly to the Japanese$\rightarrow $Vietnamese system, we apply LSW algorithm on the English texts from (4) while selecting 1-best synonym for each OOV word. The number of replaced words on English texts are indicated in the Table TABREF22. Again, we obtained a bigger gain of +0.99 (+1.02) BLEU points in English$\rightarrow $Vietnamese direction. Compared to the most recent work BIBREF25, our system reports an improvement of +0.47 standard BLEU points on the same dataset.
We investigate some examples of translations generated by the English$\rightarrow $Vietnamese systems with our proposed methods in Table TABREF23. The bold texts in red present correct or approximate translations, while the italic texts in gray denote incorrect translations. In the first example, we consider two words: presentation and the unknown word applauded. The word presentation is predicted correctly as the Vietnamese "bài thuyết trình" in most cases when we combine the source context through embeddings. The unknown word applauded, which has not been seen in the vocabulary, is ignored in the first two cases (baseline and source embedding) but is roughly translated as the Vietnamese "hoan nghênh" in the SAA system because it is separated into applaud and ed. In the second example, we observe the translations of the unknown word tryout; they are wrong in the first three cases, but in the LSW system it is predicted with a closer meaning as the Vietnamese "bài kiểm tra" because it is replaced by its synonym test.
Related Works
Addressing unknown words was considered early on in Statistical Machine Translation (SMT) systems. Typical studies include the following: BIBREF26 proposed four techniques to overcome this situation by extending the morphology and spelling of words, using a bilingual dictionary, or transliterating names; these approaches are difficult to adapt to different domains. BIBREF27 trained word embedding models to learn word similarity from monolingual data, and an unknown word is then replaced by its most similar word. BIBREF28 used a linear model to learn mappings between the source and target spaces, based on a small initial bilingual dictionary, to find the translations of source words. In NMT, however, there are not many works tackling this problem. BIBREF7 use a very large vocabulary to handle unknown words. BIBREF6 generate a dictionary from alignment data based on an annotated corpus to decide the hypotheses for unknown words. BIBREF3 introduced solutions for dealing with the rare-word problem; however, their models require more parameters, thus decreasing the overall efficiency.
In another direction, BIBREF9 exploited the BPE algorithm to reduce the number of unknown words in NMT and achieved significant gains on many language pairs. The second approach presented in this work follows this direction: instead of using an unsupervised method to split rare and unknown words into translatable sub-words, we use a supervised method. Our third approach, using WordNet, can be seen as a form of smoothing, in which we use the translations of synonymous words to approximate the translation of an OOV word. Another work worth mentioning in this direction is BIBREF29, which uses morphological and semantic information as word factors to help translate rare words.
Conclusion
In this study, we have proposed three different strategies to handle rare words in NMT, whose combination brings significant improvements to the NMT systems on two low-resource language pairs. In future work, we will consider selecting appropriate synonyms for the source sentence from the n-best synonymous words to further improve the performance of the NMT systems, and we will leverage more unsupervised methods based on monolingual data to address the rare-word problem.
Acknowledgments
This work is supported by the project "Building a machine translation system to support translation of documents between Vietnamese and Japanese to help managers and businesses in Hanoi approach to Japanese market", No. TC.02-2016-03. | No |
7b3d207ed47ae58286029b62fd0c160a0145e73d
7b3d207ed47ae58286029b62fd0c160a0145e73d_0
Q: What is the dataset that is used in the paper?
Text: Introduction
The detection of anomalous trends in the financial domain has focused largely on fraud detection BIBREF0, risk modeling BIBREF1, and predictive analysis BIBREF2. The data used in the majority of such studies is of time-series, transactional, graph or generally quantitative or structured nature. This belies the critical importance of semi-structured or unstructured text corpora that practitioners in the finance domain derive insights from—corpora such as financial reports, press releases, earnings call transcripts, credit agreements, news articles, customer interaction logs, and social data.
Previous research in anomaly detection from text has evolved largely independently from financial applications. Unsupervised clustering methods have been applied to documents in order to identify outliers and emerging topics BIBREF3. Deviation analysis has been applied to text in order to identify errors in spelling BIBREF4 and tagging of documents BIBREF5. Recent popularity of distributional semantics BIBREF6 has led to further advances in semantic deviation analysis BIBREF7. However, current research remains largely divorced from specific applications within the domain of finance.
In the following sections, we enumerate major applications of anomaly detection from text in the financial domain, and contextualize them within current research topics in Natural Language Processing.
Five views on anomaly
Anomaly detection is a strategy that is often employed in contexts where a deviation from a certain norm is sought to be captured, especially when extreme class imbalance impedes the use of a supervised approach. The implementation of such methods allows for the unveiling of previously hidden or obstructed insights.
In this section, we lay out five perspectives on how textual anomaly detection can be applied in the context of finance, and how each application opens up opportunities for NLP researchers to apply current research to the financial domain.
Five views on anomaly ::: Anomaly as error
Previous studies have used anomaly detection to identify and correct errors in text BIBREF4, BIBREF5. These are often unintentional errors that occur as a result of some form of data transfer, e.g. from audio to text, from image to text, or from one language to another. Such studies have direct applicability to the error-prone process of earnings call or customer call transcription, where audio quality, accents, and domain-specific terms can lead to errors. Consider a scenario where the CEO of a company states in an audio conference, `Now investments will be made in Asia.' However, the system instead transcribes, `No investments will be made in Asia.' There is a meaningful difference in the implication of the two statements that could greatly influence the analysis and future direction of the company. Additionally, with regards to the second scenario, it is highly unlikely that the CEO would make such a strong and negative statement in a public setting thus supporting the use of anomaly detection for error correction.
Optical-character-recognition from images is another error-prone process with large applicability to finance. Many financial reports and presentations are circulated as image documents that need to undergo OCR in order to be machine-readable. OCR might also be applicable to satellite imagery and other forms of image data that might include important textual content such as a graphical representation of financial data. Errors that result from OCR'd documents can often be fixed using systems that have a robust semantic representation of the target domain. For instance, a model that is trained on financial reports might have encoded awareness that emojis are unlikely to appear in them or that it is unusual for the numeric value of profit to be higher than that of revenue.
Five views on anomaly ::: Anomaly as irregularity
Anomaly in the semantic space might reflect irregularities that are intentional or emergent, signaling risky behavior or phenomena. A sudden change in the tone and vocabulary of a company's leadership in their earnings calls or financial reports can signal risk. News stories that have abnormal language, or irregular origination or propagation patterns might be unreliable or untrustworthy.
BIBREF8 showed that when trained on similar domains or contexts, distributed representations of words are likely to be stable, where stability is measured as the similarity of their nearest neighbors in the distributed space. Such insight can be used to assess anomalies in this sense. As an example, BIBREF9 identified cliques of users on Twitter who consistently shared news from similar domains. Characterizing these networks as “echo-chambers,” they then represented the content shared by these echo-chambers as distributed representations. When certain topics from one echo-chamber began to deviate from similar topics in other echo-chambers, the content was tagged as unreliable. BIBREF9 showed that this method can be used to improve the performance of standard methods for fake-news detection.
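One simple way to operationalize this notion of stability is to measure how much a word's nearest-neighborhood changes between two embedding spaces (for example, two time periods or two echo-chambers); the NumPy sketch below is illustrative only and assumes unit-normalized vectors.

    import numpy as np

    def neighbors(word, emb, k=10):
        """emb: dict mapping word -> unit-normalized vector; returns the k most similar words."""
        v = emb[word]
        sims = {w: float(v @ u) for w, u in emb.items() if w != word}
        return set(sorted(sims, key=sims.get, reverse=True)[:k])

    def stability(word, emb_a, emb_b, k=10):
        """Jaccard overlap of the word's neighborhoods in two spaces; low overlap signals drift."""
        na, nb = neighbors(word, emb_a, k), neighbors(word, emb_b, k)
        return len(na & nb) / len(na | nb)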
In another study BIBREF10, the researchers hypothesized that transparent language in earnings calls indicates high expectations for performance in the upcoming quarters, whereas semantic ambiguity can signal a lack of confidence and expected poor performance. By quantifying transparency as the frequent use of numbers, shorter words, and unsophisticated vocabulary, they showed that a change in transparency is associated with a change in future performance.
Five views on anomaly ::: Anomaly as novelty
Anomaly can indicate a novel event or phenomenon that may or may not be risky. Breaking news stories often emerge as anomalous trends on social media. BIBREF11 experimented with this in their effort to detect novel events from Twitter conversations. By representing each event as a real-time cluster of tweets (where each tweet was encoded as a vector), they managed to assess the novelty of the event by comparing its centroid to the centroids of older events.
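A minimal version of this novelty test compares the centroid of a candidate event cluster against the centroids of previously seen events; the NumPy sketch below is an illustration, and the similarity threshold is an assumed placeholder.

    import numpy as np

    def is_novel(event_vectors, old_centroids, threshold=0.5):
        """event_vectors: (m, d) embeddings of the candidate event's tweets;
        old_centroids: (k, d) unit-normalized centroids of earlier events."""
        c = event_vectors.mean(axis=0)
        c = c / (np.linalg.norm(c) + 1e-12)
        if len(old_centroids) == 0:
            return True
        max_sim = float((old_centroids @ c).max())
        return max_sim < threshold          # far from every known event -> treat as novel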
Novelty detection can also be used to detect emerging trends on social media, e.g. controversies that engulf various brands often start as small local events that are shared on social media and attract attention over a short period of time. How people respond to these events in early stages of development can be a measure of their veracity or controversiality BIBREF12, BIBREF13.
An anomaly in an industry grouping of companies can also be indicative of a company that is disrupting the norm for that industry and the emergence of a new sector or sub-sector. Often known as trail-blazers, these companies innovate faster than their competitors to meet market demands sometimes even before the consumer is aware of their need. As these companies continually evolve their business lines, their core operations are novel outliers from others in the same industry classification that can serve as meaningful signals of transforming industry demands.
Five views on anomaly ::: Anomaly as semantic richness
A large portion of text documents that analysts and researchers in the financial sectors consume have a regulatory nature. Annual financial reports, credit agreements, and filings with the U.S. Securities and Exchange Commission (SEC) are some of these types of documents. These documents can be tens or hundreds of pages long, and often include boilerplate language that the readers might need to skip or ignore in order to get to the “meat” of the content. Often, the abnormal clauses found in these documents are buried in standard text so as not to attract attention to the unique phrases.
BIBREF14 used smoothed representations of n-grams in SEC filings in order to identify boilerplate and abnormal language. They did so by comparing the probability of each n-gram against the company's previous filings, against other filings in the same sector, and against other filings from companies with similar market cap. The aim was to assist accounting analysts in skipping boilerplate language and focusing their attention on important snippets in these documents.
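In the same spirit, a smoothed n-gram score can separate boilerplate (language that is very common in a reference collection) from abnormal language (very rare in it); the sketch below uses simple add-one smoothing and is only an illustration, not the method of BIBREF14.

    import math
    from collections import Counter

    def ngrams(tokens, n=3):
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    def avg_log_prob(snippet, reference_counts, total, distinct, n=3):
        """reference_counts: Counter of n-grams from a reference corpus (e.g. the company's past
        filings, its sector, or similarly sized peers); total / distinct: its size and vocabulary.
        Returns the average add-one-smoothed log-probability of the snippet's n-grams."""
        grams = ngrams(snippet, n)
        if not grams:
            return 0.0
        score = sum(math.log((reference_counts[g] + 1) / (total + distinct)) for g in grams)
        return score / len(grams)   # very high -> likely boilerplate; very low -> abnormal language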
Similar methods can be applied to credit agreements where covenants and clauses that are too common are often ignored by risk analysts and special attention is paid to clauses that “stand out” from similar agreements.
Five views on anomaly ::: Anomaly as contextual relevance
Certain types of documents include universal as well as context-specific signals. As an example, consider a given company's financial reports. The reports may include standard financial metrics such as total revenue, net sales, net income, etc. In addition to these universal metrics, businesses often report their performance in terms of the performance of their operating segments. These segments can be business divisions, products, services, or regional operations. The segments are often specific to the company or its peers. For example, Apple Inc.'s segments might include “iPhone,” “iMac,” “iPad,” and “services.” The same segments will not appear in reports by other businesses.
For many analysts and researchers, operating segments are a crucial part of exploratory or predictive analysis. They use performance metrics associated with these segments to compare the business to its competitors, to estimate its market share, and to project the overall performance of the business in upcoming quarters. Automating the identification and normalization of these metrics can facilitate more insightful analytical research. Since these segments are often specific to each business, supervised models that are trained on a diverse set of companies cannot capture them without overfitting to certain companies. Instead, these segments can be treated as company-specific anomalies.
Anomaly detection via language modeling
Unlike numeric data, text data is not directly machine-readable, and requires some form of transformation as a pre-processing step. In “bag-of-words” methods, this transformation can take place by assigning an index number to each word, and representing any block of text as an unordered set of these words. A slightly more sophisticated approach might chain words into continuous “n-grams” and represent a block of text as an ordered series of “n-grams” that have been extracted on a sliding window of size n. These approaches are conventionally known as “language modeling.”
Since the advent of high-powered processors enabled the widespread use of distributed representations, language modeling has rapidly evolved and adapted to these new capabilities. Recurrent neural networks can capture an arbitrarily long sequence of text and perform various tasks such as classification or text generation BIBREF16. In this new context, language modeling often refers to training a recurrent network that predicts a word in a given sequence of text BIBREF17. Language models are easy to train because even though they follow a predictive mechanism, they do not need any labeled data, and are thus unsupervised.
Figure FIGREF6 is a simple illustration of how a neural network that is composed of recurrent units such as Long Short-Term Memory (LSTM) BIBREF18 can perform language modeling. There are four main components to the network:
The input vectors ($x_i$), which represent units (i.e. characters, words, phrases, sentences, paragraphs, etc.) in the input text. Occasionally, these are represented by one-hot vectors that assign a unique index to each particular input. More commonly, these vectors are adapted from a pre-trained corpus, where distributed representations have been inferred either by a simpler auto-encoding process BIBREF19 or by applying the same recurrent model to a baseline corpus such as Wikipedia BIBREF17.
The output vectors ($y_i$), which represent the model's prediction of the next word in the sequence. Naturally, they are represented in the same dimensionality as $x_i$s.
The hidden vectors ($h_i$), which are often randomly initialized and learned through backpropagation. Often trained as dense representations, these vectors tend to display characteristics that indicate semantic richness BIBREF20 and compositionality BIBREF19. While the language model can be used as a text-generation mechanism, the hidden vectors are a strong side product that are sometimes extracted and reused as augmented features in other machine learning systems BIBREF21.
The weights of the network ($W_{ij}$) (or other parameters in the network), which are tuned through backpropagation. These often indicate how each vector in the input or hidden sequence is utilized to generate the output. These parameters play a big role in the way the output of neural networks is reverse-engineered or explained to the end user.
The distributions of any of the above-mentioned components can be studied to mine signals for anomalous behavior in the context of irregularity, error, novelty, semantic richness, or contextual relevance.
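A compact PyTorch sketch of such a recurrent language model, with the four components labeled, is given below; the hyper-parameters are placeholders rather than recommendations.

    import torch.nn as nn

    class LSTMLanguageModel(nn.Module):
        def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)     # input vectors x_i
            self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
            self.proj = nn.Linear(hidden_dim, vocab_size)      # weights W_ij feeding the outputs

        def forward(self, token_ids):
            x = self.embed(token_ids)       # (batch, seq, emb_dim)
            h, _ = self.lstm(x)             # hidden vectors h_i: (batch, seq, hidden_dim)
            return self.proj(h)             # scores for the output vectors y_i over the vocabulary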
Anomaly detection via language modeling ::: Anomaly in input vectors
As previously mentioned, the input vectors to a text-based neural network are often adapted from publicly-available word vector corpora. In simpler architectures, the network is allowed to back-propagate its errors all the way to the input layer, which might cause the input vectors to be modified. This can serve as a signal for anomaly in the semantic distributions between the original vectors and the modified vectors.
Analyzing the stability of word vectors when trained on different iterations can also signal anomalous trends BIBREF8.
Anomaly detection via language modeling ::: Anomaly in output vectors
As previously mentioned, language models generate a probability distribution over a word (or character) in a sequence. These probabilities can be used to detect transcription or character-recognition errors in a domain-friendly manner. When the language model is trained on financial data, domain-specific trends (such as the use of commas and parentheses in financial metrics) can be captured and accounted for by the network, minimizing the rate of false positives.
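Concretely, token-level probabilities from such a model can be used to flag likely transcription or OCR errors; the sketch below assumes a model like the hypothetical LSTMLanguageModel above and an arbitrary probability threshold.

    import torch
    import torch.nn.functional as F

    def flag_suspicious_tokens(model, token_ids, threshold=1e-4):
        """token_ids: (1, seq) tensor; returns positions whose observed token the model finds improbable."""
        with torch.no_grad():
            logits = model(token_ids)                        # (1, seq, vocab)
            probs = F.softmax(logits[:, :-1, :], dim=-1)     # predictions for positions 1..seq-1
            observed = token_ids[:, 1:]                      # the tokens that actually occur there
            p = probs.gather(-1, observed.unsqueeze(-1)).squeeze(-1)
        return (p < threshold).nonzero(as_tuple=True)[1].tolist()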
Anomaly detection via language modeling ::: Anomaly in hidden vectors
A recent advancement in text processing is the introduction of fine-tuning methods to neural networks trained on text BIBREF17. Fine-tuning is an approach that facilitates the transfer of semantic knowledge from one domain (source) to another domain (target). The source domain is often large and generic, such as web data or the Wikipedia corpus, while the target domain is often specific (e.g. SEC filings). A network is pre-trained on the source corpus such that its hidden representations are enriched. Next, the pre-trained network is re-trained on the target domain, but this time only the final (or top few) layers are tuned and the parameters in the remaining layers remain “frozen.” The top-most layer of the network can be modified to perform a classification, prediction, or generation task in the target domain (see Figure FIGREF15).
Fine-tuning aims to change the distribution of hidden representations in such a way that important information about the source domain is preserved, while idiosyncrasies of the target domain are captured in an effective manner BIBREF22. A similar process can be used to determine anomalies in documents. As an example, consider a model that is pre-trained on historical documents from a given sector. If fine-tuning the model on recent documents from the same sector dramatically shifts the representations for certain vectors, this can signal an evolving trend.
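In PyTorch terms, the freezing step simply disables gradients for the lower layers so that only the top of the network is updated on the target domain; a minimal sketch, reusing the hypothetical model above:

    import torch

    def fine_tune_setup(model, lr=1e-4):
        """Freeze the embeddings and the LSTM; only the output projection is re-trained."""
        for p in model.embed.parameters():
            p.requires_grad = False
        for p in model.lstm.parameters():
            p.requires_grad = False
        trainable = [p for p in model.parameters() if p.requires_grad]
        return torch.optim.Adam(trainable, lr=lr)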
Anomaly detection via language modeling ::: Anomaly in weight tensors and other parameters
Models that have interpretable parameters can be used to identify areas of deviation or anomalous content. Attention mechanisms BIBREF23 allow the network to account for certain input signals more than others. The learned attention mechanism can provide insight into potential anomalies in the input. Consider a language model that predicts the social media engagement for a given tweet. Such a model can be used to distinguish between engaging and information-rich content versus clickbait, bot-generated, propagandistic, or promotional content by exposing how, for these categories, engagement is associated with attention to certain distributions of “trigger words.”
Table TABREF17 lists four scenarios for using the various layers and parameters of a language model in order to perform anomaly detection from text.
Challenges and Future Research
Like many other domains, in the financial domain, the application of language models as a measurement for semantic regularity of text bears the challenge of dealing with unseen input. Unseen input can be mistaken for anomaly, especially in systems that are designed for error detection. As an example, a system that is trained to correct errors in an earnings call transcript might treat named entities such as the names of a company's executives, or a recent acquisition, as anomalies. This problem is particularly prominent in fine-tuned language models, which are pre-trained on generic corpora that might not include domain-specific terms.
When anomalies are of a malicious nature, such as in the case where abnormal clauses are included in credit agreements, the implementation of the anomalous content is adapted to appear normal. Thereby, the task of detecting normal language becomes more difficult.
Alternatively, in the case of language used by executives in company presentations such as earnings calls, there may be a lot of noise in the data due to the large degree of variability in the personalities and linguistic patterns of various leaders. The noise variability present in this content could be similar to actual anomalies, hence making it difficult to identify true anomalies.
Factors related to market interactions and competitive behavior can also impact the effectiveness of anomaly-detection models. In detecting the emergence of a new industry sector, it may be challenging for a system to detect novelty when a collection of companies, rather than a single company, behave in an anomalous way. The former may be the more common real-world scenario as companies closely monitor and mimic the innovations of their competitors. The exact notion of anomaly can also vary based on the sector and point in time. For example, in the technology sector, the norm in today's world is one of continuous innovation and technological advancements.
Additionally, certain types of anomaly can interact and make it difficult for systems to distinguish between them. As an example, a system that is trained to identify the operating segments of a company tends to distinguish between information that is specific to the company, and information that is common across different companies. As a result, it might identify the names of the company's board of directors or its office locations as its operating segments.
Traditional machine learning models have previously tackled the above challenges, and solutions are likely to emerge in the neural paradigms as well. Any future research in these directions will have to account for the impact of such solutions on the reliability and explainability of the resulting models and their robustness against adversarial data.
Conclusion
Anomaly detection from text can have numerous applications in finance, including risk detection, predictive analysis, error correction, and peer detection. We have outlined various perspectives on how anomaly can be interpreted in the context of finance, and corresponding views on how language modeling can be used to detect such aspects of anomalous content. We hope that this paper lays the groundwork for establishing a framework for understanding the opportunities and risks associated with these methods when applied in the financial domain. | Unanswerable |
d58c264068d8ca04bb98038b4894560b571bab3e
d58c264068d8ca04bb98038b4894560b571bab3e_0
Q: What is the performance of the models discussed in the paper?
Text: Introduction
The detection of anomalous trends in the financial domain has focused largely on fraud detection BIBREF0, risk modeling BIBREF1, and predictive analysis BIBREF2. The data used in the majority of such studies is of time-series, transactional, graph or generally quantitative or structured nature. This belies the critical importance of semi-structured or unstructured text corpora that practitioners in the finance domain derive insights from—corpora such as financial reports, press releases, earnings call transcripts, credit agreements, news articles, customer interaction logs, and social data.
Previous research in anomaly detection from text has evolved largely independently from financial applications. Unsupervised clustering methods have been applied to documents in order to identify outliers and emerging topics BIBREF3. Deviation analysis has been applied to text in order to identify errors in spelling BIBREF4 and tagging of documents BIBREF5. Recent popularity of distributional semantics BIBREF6 has led to further advances in semantic deviation analysis BIBREF7. However, current research remains largely divorced from specific applications within the domain of finance.
In the following sections, we enumerate major applications of anomaly detection from text in the financial domain, and contextualize them within current research topics in Natural Language Processing.
Five views on anomaly
Anomaly detection is a strategy that is often employed in contexts where a deviation from a certain norm is sought to be captured, especially when extreme class imbalance impedes the use of a supervised approach. The implementation of such methods allows for the unveiling of previously hidden or obstructed insights.
In this section, we lay out five perspectives on how textual anomaly detection can be applied in the context of finance, and how each application opens up opportunities for NLP researchers to apply current research to the financial domain.
Five views on anomaly ::: Anomaly as error
Previous studies have used anomaly detection to identify and correct errors in text BIBREF4, BIBREF5. These are often unintentional errors that occur as a result of some form of data transfer, e.g. from audio to text, from image to text, or from one language to another. Such studies have direct applicability to the error-prone process of earnings call or customer call transcription, where audio quality, accents, and domain-specific terms can lead to errors. Consider a scenario where the CEO of a company states in an audio conference, `Now investments will be made in Asia.' However, the system instead transcribes, `No investments will be made in Asia.' There is a meaningful difference in the implication of the two statements that could greatly influence the analysis and future direction of the company. Additionally, with regards to the second scenario, it is highly unlikely that the CEO would make such a strong and negative statement in a public setting thus supporting the use of anomaly detection for error correction.
Optical-character-recognition from images is another error-prone process with large applicability to finance. Many financial reports and presentations are circulated as image documents that need to undergo OCR in order to be machine-readable. OCR might also be applicable to satellite imagery and other forms of image data that might include important textual content such as a graphical representation of financial data. Errors that result from OCR'd documents can often be fixed using systems that have a robust semantic representation of the target domain. For instance, a model that is trained on financial reports might have encoded awareness that emojis are unlikely to appear in them or that it is unusual for the numeric value of profit to be higher than that of revenue.
Five views on anomaly ::: Anomaly as irregularity
Anomaly in the semantic space might reflect irregularities that are intentional or emergent, signaling risky behavior or phenomena. A sudden change in the tone and vocabulary of a company's leadership in their earnings calls or financial reports can signal risk. News stories that have abnormal language, or irregular origination or propagation patterns might be unreliable or untrustworthy.
BIBREF8 showed that when trained on similar domains or contexts, distributed representations of words are likely to be stable, where stability is measured as the similarity of their nearest neighbors in the distributed space. Such insight can be used to assess anomalies in this sense. As an example, BIBREF9 identified cliques of users on Twitter who consistently shared news from similar domains. Characterizing these networks as “echo-chambers,” they then represented the content shared by these echo-chambers as distributed representations. When certain topics from one echo-chamber began to deviate from similar topics in other echo-chambers, the content was tagged as unreliable. BIBREF9 showed that this method can be used to improve the performance of standard methods for fake-news detection.
In another study BIBREF10, the researchers hypothesized that transparent language in earnings calls indicates high expectations for performance in the upcoming quarters, whereas semantic ambiguity can signal a lack of confidence and expected poor performance. By quantifying transparency as the frequent use of numbers, shorter words, and unsophisticated vocabulary, they showed that a change in transparency is associated with a change in future performance.
Five views on anomaly ::: Anomaly as novelty
Anomaly can indicate a novel event or phenomenon that may or may not be risky. Breaking news stories often emerge as anomalous trends on social media. BIBREF11 experimented with this in their effort to detect novel events from Twitter conversations. By representing each event as a real-time cluster of tweets (where each tweet was encoded as a vector), they managed to assess the novelty of the event by comparing its centroid to the centroids of older events.
Novelty detection can also be used to detect emerging trends on social media, e.g. controversies that engulf various brands often start as small local events that are shared on social media and attract attention over a short period of time. How people respond to these events in early stages of development can be a measure of their veracity or controversiality BIBREF12, BIBREF13.
An anomaly in an industry grouping of companies can also be indicative of a company that is disrupting the norm for that industry and the emergence of a new sector or sub-sector. Often known as trail-blazers, these companies innovate faster than their competitors to meet market demands sometimes even before the consumer is aware of their need. As these companies continually evolve their business lines, their core operations are novel outliers from others in the same industry classification that can serve as meaningful signals of transforming industry demands.
Five views on anomaly ::: Anomaly as semantic richness
A large portion of text documents that analysts and researchers in the financial sectors consume have a regulatory nature. Annual financial reports, credit agreements, and filings with the U.S. Securities and Exchange Commission (SEC) are some of these types of documents. These documents can be tens or hundreds of pages long, and often include boilerplate language that the readers might need to skip or ignore in order to get to the “meat” of the content. Often, the abnormal clauses found in these documents are buried in standard text so as not to attract attention to the unique phrases.
BIBREF14 used smoothed representations of n-grams in SEC filings in order to identify boilerplate and abnormal language. They did so by comparing the probability of each n-gram against the company's previous filings, against other filings in the same sector, and against other filings from companies with similar market cap. The aim was to assist accounting analysts in skipping boilerplate language and focusing their attention on important snippets in these documents.
Similar methods can be applied to credit agreements where covenants and clauses that are too common are often ignored by risk analysts and special attention is paid to clauses that “stand out” from similar agreements.
Five views on anomaly ::: Anomaly as contextual relevance
Certain types of documents include universal as well as context-specific signals. As an example, consider a given company's financial reports. The reports may include standard financial metrics such as total revenue, net sales, net income, etc. In addition to these universal metrics, businesses often report their performance in terms of the performance of their operating segments. These segments can be business divisions, products, services, or regional operations. The segments are often specific to the company or its peers. For example, Apple Inc.'s segments might include “iPhone,” “iMac,” “iPad,” and “services.” The same segments will not appear in reports by other businesses.
For many analysts and researchers, operating segments are a crucial part of exploratory or predictive analysis. They use performance metrics associated with these segments to compare the business to its competitors, to estimate its market share, and to project the overall performance of the business in upcoming quarters. Automating the identification and normalization of these metrics can facilitate more insightful analytical research. Since these segments are often specific to each business, supervised models that are trained on a diverse set of companies cannot capture them without overfitting to certain companies. Instead, these segments can be treated as company-specific anomalies.
Anomaly detection via language modeling
Unlike numeric data, text data is not directly machine-readable, and requires some form of transformation as a pre-processing step. In “bag-of-words” methods, this transformation can take place by assigning an index number to each word, and representing any block of text as an unordered set of these words. A slightly more sophisticated approach might chain words into continuous “n-grams” and represent a block of text as an ordered series of “n-grams” that have been extracted on a sliding window of size n. These approaches are conventionally known as “language modeling.”
Since the advent of high-powered processors enabled the widespread use of distributed representations, language modeling has rapidly evolved and adapted to these new capabilities. Recurrent neural networks can capture an arbitrarily long sequence of text and perform various tasks such as classification or text generation BIBREF16. In this new context, language modeling often refers to training a recurrent network that predicts a word in a given sequence of text BIBREF17. Language models are easy to train because even though they follow a predictive mechanism, they do not need any labeled data, and are thus unsupervised.
Figure FIGREF6 is a simple illustration of how a neural network composed of recurrent units such as Long Short-Term Memory (LSTM) BIBREF18 can perform language modeling. There are four main components of the network:
The input vectors ($x_i$), which represent units (i.e. characters, words, phrases, sentences, paragraphs, etc.) in the input text. Occasionally, these are represented by one-hot vectors that assign a unique index to each particular input. More commonly, these vectors are adapted from a pre-trained corpus, where distributed representations have been inferred either by a simpler auto-encoding process BIBREF19 or by applying the same recurrent model to a baseline corpus such as Wikipedia BIBREF17.
The output vectors ($y_i$), which represent the model's prediction of the next word in the sequence. Naturally, they are represented in the same dimensionality as $x_i$s.
The hidden vectors ($h_i$), which are often randomly initialized and learned through backpropagation. Often trained as dense representations, these vectors tend to display characteristics that indicate semantic richness BIBREF20 and compositionality BIBREF19. While the language model can be used as a text-generation mechanism, the hidden vectors are a strong side product that are sometimes extracted and reused as augmented features in other machine learning systems BIBREF21.
The weights of the network ($W_{ij}$) and other parameters, which are tuned through backpropagation. These often indicate how each vector in the input or hidden sequence is utilized to generate the output. These parameters play a big role in the way the outputs of neural networks are reverse-engineered or explained to the end user.
The distributions of any of the above-mentioned components can be studied to mine signals for anomalous behavior in the context of irregularity, error, novelty, semantic richness, or contextual relevance.
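As a minimal sketch of how these four components map onto code, the following PyTorch model embeds input tokens ($x_i$), runs them through an LSTM to obtain hidden vectors ($h_i$), and projects to next-token scores ($y_i$) via learned weights. The vocabulary size, dimensions, and toy batch are arbitrary placeholders.

```python
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)               # input vectors x_i
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)   # hidden vectors h_i
        self.out = nn.Linear(hidden_dim, vocab_size)                 # weights producing y_i

    def forward(self, token_ids):
        x = self.embed(token_ids)    # (batch, seq, emb_dim)
        h, _ = self.lstm(x)          # (batch, seq, hidden_dim)
        return self.out(h)           # (batch, seq, vocab): next-token scores y_i

# Toy usage: score each next token for a batch of random token-id sequences.
model = LSTMLanguageModel()
batch = torch.randint(0, 10000, (2, 12))
logits = model(batch)
loss = nn.CrossEntropyLoss()(logits[:, :-1].reshape(-1, 10000), batch[:, 1:].reshape(-1))
```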
Anomaly detection via language modeling ::: Anomaly in input vectors
As previously mentioned, the input vectors to a text-based neural network are often adapted from publicly-available word vector corpora. In simpler architectures, the network is allowed to back-propagate its errors all the way to the input layer, which might cause the input vectors to be modified. This can serve as a signal for anomaly in the semantic distributions between the original vectors and the modified vectors.
Analyzing the stability of word vectors when trained on different iterations can also signal anomalous trends BIBREF8.
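A minimal sketch of the first signal: compare each word's pre-trained vector with its vector after training and rank words by how far they drifted. Both embedding dictionaries are assumed inputs.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def most_shifted_words(pretrained, updated, k=10):
    """Rank words by how far their input vectors drifted from the
    pre-trained embeddings during training (both args: word -> np.array)."""
    drift = {w: 1.0 - cosine(pretrained[w], updated[w])
             for w in pretrained if w in updated}
    return sorted(drift, key=drift.get, reverse=True)[:k]
```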
Anomaly detection via language modeling ::: Anomaly in output vectors
As previously mentioned, language models generate a probability distribution over a word (or character) in a sequence. These probabilities can be used to detect transcription or character-recognition errors in a domain-friendly manner. When the language model is trained on financial data, domain-specific trends (such as the use of commas and parentheses in financial metrics) can be captured and accounted for by the network, minimizing the rate of false positives.
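As a sketch, the function below flags tokens that receive very low probability from a trained language model, which could surface transcription errors such as “No” versus “Now.” The model interface (token ids in, per-position next-token logits out — a model like the LSTM sketch above fits it) and the threshold are assumptions for illustration.

```python
import torch

@torch.no_grad()
def flag_unlikely_tokens(model, token_ids, threshold=1e-4):
    """Flag positions whose observed token gets very low probability under
    the language model (a possible transcription or OCR error).

    `token_ids` is a 1-D tensor; `model` is assumed to map a (1, seq_len)
    tensor of ids to logits of shape (1, seq_len, vocab_size)."""
    logits = model(token_ids.unsqueeze(0))        # (1, seq, vocab)
    probs = torch.softmax(logits, dim=-1)
    flagged = []
    for t in range(1, token_ids.size(0)):
        p_observed = probs[0, t - 1, token_ids[t]].item()
        if p_observed < threshold:
            flagged.append((t, p_observed))
    return flagged
```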
Anomaly detection via language modeling ::: Anomaly in hidden vectors
A recent advancement in text processing is the introduction of fine-tuning methods to neural networks trained on text BIBREF17. Fine-tuning is an approach that facilitates the transfer of semantic knowledge from one domain (source) to another domain (target). The source domain is often large and generic, such as web data or the Wikipedia corpus, while the target domain is often specific (e.g. SEC filings). A network is pre-trained on the source corpus such that its hidden representations are enriched. Next, the pre-trained network is re-trained on the target domain, but this time only the final (or top few) layers are tuned and the parameters in the remaining layers remain “frozen.” The top-most layer of the network can be modified to perform a classification, prediction, or generation task in the target domain (see Figure FIGREF15).
Fine-tuning aims to change the distribution of hidden representations in such a way that important information about the source domain is preserved, while idiosyncrasies of the target domain are captured in an effective manner BIBREF22. A similar process can be used to determine anomalies in documents. As an example, consider a model that is pre-trained on historical documents from a given sector. If fine-tuning the model on recent documents from the same sector dramatically shifts the representations for certain vectors, this can signal an evolving trend.
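A simplified sketch of the freezing step: every parameter except a designated top layer is excluded from optimization before re-training on the target corpus. The layer name and learning rate are placeholders, and in practice more gradual unfreezing schedules are common.

```python
import copy
import torch

def fine_tune_top_layer(pretrained_model, top_layer_name="out"):
    """Freeze everything except the named top layer before re-training.

    After fine-tuning, hidden representations of the copy can be compared
    with those of `pretrained_model` to look for large shifts."""
    model = copy.deepcopy(pretrained_model)
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith(top_layer_name)
    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.Adam(trainable, lr=1e-4)
    return model, optimizer
```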
Anomaly detection via language modeling ::: Anomaly in weight tensors and other parameters
Models that have interpretable parameters can be used to identify areas of deviation or anomalous content. Attention mechanisms BIBREF23 allow the network to account for certain input signals more than others. The learned attention mechanism can provide insight into potential anomalies in the input. Consider a language model that predicts the social media engagement for a given tweet. Such a model can be used to distinguish between engaging and information-rich content versus clickbait, bot-generated, propagandistic, or promotional content by exposing how, for these categories, engagement is associated with attention to certain distributions of “trigger words.”
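A small sketch of how such attention weights might be inspected once extracted from a trained model; the example tweet and the weight values below are fabricated purely for illustration.

```python
import numpy as np

def top_attended_tokens(tokens, attention_weights, k=5):
    """Rank tokens of one input by the attention mass they receive.

    `attention_weights` is assumed to be a 1-D array with one weight per
    token, e.g. averaged over the attention heads of a trained model."""
    order = np.argsort(attention_weights)[::-1][:k]
    return [(tokens[i], float(attention_weights[i])) for i in order]

# Hypothetical tweet scored by an engagement model with an attention layer.
tokens = "you won't believe what this company just announced".split()
weights = np.array([0.02, 0.15, 0.25, 0.08, 0.05, 0.10, 0.05, 0.30])
print(top_attended_tokens(tokens, weights))
```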
Table TABREF17 lists four scenarios for using the various layers and parameters of a language model in order to perform anomaly detection from text.
Challenges and Future Research
Like many other domains, in the financial domain, the application of language models as a measurement for semantic regularity of text bears the challenge of dealing with unseen input. Unseen input can be mistaken for anomaly, especially in systems that are designed for error detection. As an example, a system that is trained to correct errors in an earnings call transcript might treat named entities such as the names of a company's executives, or a recent acquisition, as anomalies. This problem is particularly prominent in fine-tuned language models, which are pre-trained on generic corpora that might not include domain-specific terms.
When anomalies are of a malicious nature, such as in the case where abnormal clauses are included in credit agreements, the implementation of the anomalous content is adapted to appear normal. Thereby, the task of detecting normal language becomes more difficult.
Alternatively, in the case of language used by executives in company presentations such as earnings calls, there may be a lot of noise in the data due to the large degree of variability in the personalities and linguistic patterns of various leaders. This natural variability can resemble genuine anomalies, making it difficult to identify the true ones.
Factors related to market interactions and competitive behavior can also impact the effectiveness of anomaly-detection models. In detecting the emergence of a new industry sector, it may be challenging for a system to detect novelty when a collection of companies, rather than a single company, behave in an anomalous way. The former may be the more common real-world scenario as companies closely monitor and mimic the innovations of their competitors. The exact notion of anomaly can also vary based on the sector and point in time. For example, in the technology sector, the norm in today's world is one of continuous innovation and technological advancements.
Additionally, certain types of anomaly can interact and make it difficult for systems to distinguish between them. As an example, a system that is trained to identify the operating segments of a company tends to distinguish between information that is specific to the company, and information that is common across different companies. As a result, it might identify the names of the company's board of directors or its office locations as its operating segments.
Traditional machine learning models have previously tackled the above challenges, and solutions are likely to emerge in the neural paradigms as well. Any future research in these directions will have to account for the impact of such solutions on the reliability and explainability of the resulting models and their robustness against adversarial data.
Conclusion
Anomaly detection from text can have numerous applications in finance, including risk detection, predictive analysis, error correction, and peer detection. We have outlined various perspectives on how anomaly can be interpreted in the context of finance, and corresponding views on how language modeling can be used to detect such aspects of anomalous content. We hope that this paper lays the groundwork for establishing a framework for understanding the opportunities and risks associated with these methods when applied in the financial domain. | Unanswerable |
f80d89fb905b3e7e17af1fe179b6f441405ad79b | f80d89fb905b3e7e17af1fe179b6f441405ad79b_0 | Q: Does the paper consider the use of perplexity in order to identify text anomalies?
Text: Introduction
The detection of anomalous trends in the financial domain has focused largely on fraud detection BIBREF0, risk modeling BIBREF1, and predictive analysis BIBREF2. The data used in the majority of such studies is of time-series, transactional, graph or generally quantitative or structured nature. This belies the critical importance of semi-structured or unstructured text corpora that practitioners in the finance domain derive insights from—corpora such as financial reports, press releases, earnings call transcripts, credit agreements, news articles, customer interaction logs, and social data.
Previous research in anomaly detection from text has evolved largely independently from financial applications. Unsupervised clustering methods have been applied to documents in order to identify outliers and emerging topics BIBREF3. Deviation analysis has been applied to text in order to identify errors in spelling BIBREF4 and tagging of documents BIBREF5. Recent popularity of distributional semantics BIBREF6 has led to further advances in semantic deviation analysis BIBREF7. However, current research remains largely divorced from specific applications within the domain of finance.
In the following sections, we enumerate major applications of anomaly detection from text in the financial domain, and contextualize them within current research topics in Natural Language Processing.
Five views on anomaly
Anomaly detection is a strategy that is often employed in contexts where a deviation from a certain norm is sought to be captured, especially when extreme class imbalance impedes the use of a supervised approach. The implementation of such methods allows for the unveiling of previously hidden or obstructed insights.
In this section, we lay out five perspectives on how textual anomaly detection can be applied in the context of finance, and how each application opens up opportunities for NLP researchers to apply current research to the financial domain.
Five views on anomaly ::: Anomaly as error
Previous studies have used anomaly detection to identify and correct errors in text BIBREF4, BIBREF5. These are often unintentional errors that occur as a result of some form of data transfer, e.g. from audio to text, from image to text, or from one language to another. Such studies have direct applicability to the error-prone process of earnings call or customer call transcription, where audio quality, accents, and domain-specific terms can lead to errors. Consider a scenario where the CEO of a company states in an audio conference, `Now investments will be made in Asia.' However, the system instead transcribes, `No investments will be made in Asia.' There is a meaningful difference in the implication of the two statements that could greatly influence the analysis and future direction of the company. Additionally, with regards to the second scenario, it is highly unlikely that the CEO would make such a strong and negative statement in a public setting thus supporting the use of anomaly detection for error correction.
Optical-character-recognition from images is another error-prone process with large applicability to finance. Many financial reports and presentations are circulated as image documents that need to undergo OCR in order to be machine-readable. OCR might also be applicable to satellite imagery and other forms of image data that might include important textual content such as a graphical representation of financial data. Errors that result from OCR'd documents can often be fixed using systems that have a robust semantic representation of the target domain. For instance, a model that is trained on financial reports might have encoded awareness that emojis are unlikely to appear in them or that it is unusual for the numeric value of profit to be higher than that of revenue.
Five views on anomaly ::: Anomaly as irregularity
Anomaly in the semantic space might reflect irregularities that are intentional or emergent, signaling risky behavior or phenomena. A sudden change in the tone and vocabulary of a company's leadership in their earnings calls or financial reports can signal risk. News stories that have abnormal language, or irregular origination or propagation patterns might be unreliable or untrustworthy.
BIBREF8 showed that when trained on similar domains or contexts, distributed representations of words are likely to be stable, where stability is measured as the similarity of their nearest neighbors in the distributed space. Such insight can be used to assess anomalies in this sense. As an example, BIBREF9 identified cliques of users on Twitter who consistently shared news from similar domains. Characterizing these networks as “echo-chambers,” they then represented the content shared by these echo-chambers as distributed representations. When certain topics from one echo-chamber began to deviate from similar topics in other echo-chambers, the content was tagged as unreliable. BIBREF9 showed that this method can be used to improve the performance of standard methods for fake-news detection.
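One way to operationalize this notion of stability is the overlap of a word's nearest neighbors across two embedding spaces, as in the sketch below; rows are assumed to be L2-normalized word vectors indexed consistently across the two spaces, and the choice of k is arbitrary.

```python
import numpy as np

def nearest_neighbors(emb, idx, k=10):
    sims = emb @ emb[idx]
    sims[idx] = -np.inf          # exclude the word itself
    return set(np.argsort(sims)[::-1][:k].tolist())

def stability(emb_a, emb_b, idx, k=10):
    """Fraction of a word's k nearest neighbors shared across two embedding
    spaces trained on comparable corpora; low values can flag drifting or
    unstable usage of the word."""
    return len(nearest_neighbors(emb_a, idx, k) & nearest_neighbors(emb_b, idx, k)) / k
```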
In another study BIBREF10, the researchers hypothesized that transparent language in earnings calls indicates high expectations for performance in the upcoming quarters, whereas semantic ambiguity can signal a lack of confidence and expected poor performance. By quantifying transparency as the frequent use of numbers, shorter words, and unsophisticated vocabulary, they showed that a change in transparency is associated with a change in future performance.
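A toy sketch of such a transparency measure follows; the specific weighting of numeric density against average word length is an arbitrary illustration, not the operationalization used in BIBREF10.

```python
import re

def transparency_score(text):
    """Toy operationalization of 'transparent language': more numbers and
    shorter words give a higher score. Weights are illustrative only."""
    tokens = text.split()
    if not tokens:
        return 0.0
    num_share = sum(bool(re.search(r"\d", t)) for t in tokens) / len(tokens)
    avg_len = sum(len(t) for t in tokens) / len(tokens)
    return num_share - 0.05 * avg_len

# Fabricated earnings-call snippets for illustration.
calls = {"Q1": "revenue grew 12% to $4.2 billion and margins improved 80 bps",
         "Q2": "we continue to navigate a dynamic and evolving macroeconomic landscape"}
print({q: round(transparency_score(t), 3) for q, t in calls.items()})
```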
Five views on anomaly ::: Anomaly as novelty
Anomaly can indicate a novel event or phenomenon that may or may not be risky. Breaking news stories often emerge as anomalous trends on social media. BIBREF11 experimented with this in their effort to detect novel events from Twitter conversations. By representing each event as a real-time cluster of tweets (where each tweet was encoded as a vector), they managed to assess the novelty of the event by comparing its centroid to the centroids of older events.
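A minimal sketch of this centroid comparison, assuming tweets have already been encoded as vectors:

```python
import numpy as np

def novelty(new_event_vecs, old_event_centroids):
    """Distance between a new event's centroid (mean of its tweet vectors)
    and the nearest centroid of previously seen events; larger = more novel."""
    centroid = np.mean(new_event_vecs, axis=0)
    dists = [np.linalg.norm(centroid - c) for c in old_event_centroids]
    return min(dists) if dists else float("inf")
```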
Novelty detection can also be used to detect emerging trends on social media, e.g. controversies that engulf various brands often start as small local events that are shared on social media and attract attention over a short period of time. How people respond to these events in early stages of development can be a measure of their veracity or controversiality BIBREF12, BIBREF13.
An anomaly in an industry grouping of companies can also be indicative of a company that is disrupting the norm for that industry and the emergence of a new sector or sub-sector. Often known as trail-blazers, these companies innovate faster than their competitors to meet market demands sometimes even before the consumer is aware of their need. As these companies continually evolve their business lines, their core operations are novel outliers from others in the same industry classification that can serve as meaningful signals of transforming industry demands.
Five views on anomaly ::: Anomaly as semantic richness
A large portion of text documents that analysts and researchers in the financial sectors consume have a regulatory nature. Annual financial reports, credit agreements, and filings with the U.S. Securities and Exchange Commission (SEC) are some of these types of documents. These documents can be tens or hundreds of pages long, and often include boilerplate language that the readers might need to skip or ignore in order to get to the “meat” of the content. Often, the abnormal clauses found in these documents are buried in standard text so as not to attract attention to the unique phrases.
BIBREF14 used smoothed representations of n-grams in SEC filings in order to identify boilerplate and abnormal language. They did so by comparing the probability of each n-gram against the company's previous filings, against other filings in the same sector, and against other filings from companies with similar market cap. The aim was to assist accounting analysts in skipping boilerplate language and focusing their attention on important snippets in these documents.
Similar methods can be applied to credit agreements where covenants and clauses that are too common are often ignored by risk analysts and special attention is paid to clauses that “stand out” from similar agreements.
Five views on anomaly ::: Anomaly as contextual relevance
Certain types of documents include universal as well as context-specific signals. As an example, consider a given company's financial reports. The reports may include standard financial metrics such as total revenue, net sales, net income, etc. In addition to these universal metrics, businesses often report their performance in terms of the performance of their operating segments. These segments can be business divisions, products, services, or regional operations. The segments are often specific to the company or its peers. For example, Apple Inc.'s segments might include “iPhone,” “iMac,” “iPad,” and “services.” The same segments will not appear in reports by other businesses.
For many analysts and researchers, operating segments are a crucial part of exploratory or predictive analysis. They use performance metrics associated with these segments to compare the business to its competitors, to estimate its market share, and to project the overall performance of the business in upcoming quarters. Automating the identification and normalization of these metrics can facilitate more insightful analytical research. Since these segments are often specific to each business, supervised models that are trained on a diverse set of companies cannot capture them without overfitting to certain companies. Instead, these segments can be treated as company-specific anomalies.
Anomaly detection via language modeling
Unlike numeric data, text data is not directly machine-readable, and requires some form of transformation as a pre-processing step. In “bag-of-words” methods, this transformation can take place by assigning an index number to each word, and representing any block of text as an unordered set of these words. A slightly more sophisticated approach might chain words into continuous “n-grams” and represent a block of text as an ordered series of “n-grams” that have been extracted on a sliding window of size n. These approaches are conventionally known as “language modeling.”
Since the advent of high-powered processors enabled the widespread use of distributed representations, language modeling has rapidly evolved and adapted to these new capabilities. Recurrent neural networks can capture an arbitrarily long sequence of text and perform various tasks such as classification or text generation BIBREF16. In this new context, language modeling often refers to training a recurrent network that predicts a word in a given sequence of text BIBREF17. Language models are easy to train because even though they follow a predictive mechanism, they do not need any labeled data, and are thus unsupervised.
Figure FIGREF6 is a simple illustration of how a neural network composed of recurrent units such as Long Short-Term Memory (LSTM) BIBREF18 can perform language modeling. There are four main components of the network:
The input vectors ($x_i$), which represent units (i.e. characters, words, phrases, sentences, paragraphs, etc.) in the input text. Occasionally, these are represented by one-hot vectors that assign a unique index to each particular input. More commonly, these vectors are adapted from a pre-trained corpus, where distributed representations have been inferred either by a simpler auto-encoding process BIBREF19 or by applying the same recurrent model to a baseline corpus such as Wikipedia BIBREF17.
The output vectors ($y_i$), which represent the model's prediction of the next word in the sequence. Naturally, they are represented in the same dimensionality as $x_i$s.
The hidden vectors ($h_i$), which are often randomly initialized and learned through backpropagation. Often trained as dense representations, these vectors tend to display characteristics that indicate semantic richness BIBREF20 and compositionality BIBREF19. While the language model can be used as a text-generation mechanism, the hidden vectors are a strong side product that are sometimes extracted and reused as augmented features in other machine learning systems BIBREF21.
The weights of the network ($W_{ij}$) and other parameters, which are tuned through backpropagation. These often indicate how each vector in the input or hidden sequence is utilized to generate the output. These parameters play a big role in the way the outputs of neural networks are reverse-engineered or explained to the end user.
The distributions of any of the above-mentioned components can be studied to mine signals for anomalous behavior in the context of irregularity, error, novelty, semantic richness, or contextual relevance.
Anomaly detection via language modeling ::: Anomaly in input vectors
As previously mentioned, the input vectors to a text-based neural network are often adapted from publicly-available word vector corpora. In simpler architectures, the network is allowed to back-propagate its errors all the way to the input layer, which might cause the input vectors to be modified. This can serve as a signal for anomaly in the semantic distributions between the original vectors and the modified vectors.
Analyzing the stability of word vectors when trained on different iterations can also signal anomalous trends BIBREF8.
Anomaly detection via language modeling ::: Anomaly in output vectors
As previously mentioned, language models generate a probability distribution over a word (or character) in a sequence. These probabilities can be used to detect transcription or character-recognition errors in a domain-friendly manner. When the language model is trained on financial data, domain-specific trends (such as the use of commas and parentheses in financial metrics) can be captured and accounted for by the network, minimizing the rate of false positives.
Anomaly detection via language modeling ::: Anomaly in hidden vectors
A recent advancement in text processing is the introduction of fine-tuning methods to neural networks trained on text BIBREF17. Fine-tuning is an approach that facilitates the transfer of semantic knowledge from one domain (source) to another domain (target). The source domain is often large and generic, such as web data or the Wikipedia corpus, while the target domain is often specific (e.g. SEC filings). A network is pre-trained on the source corpus such that its hidden representations are enriched. Next, the pre-trained network is re-trained on the target domain, but this time only the final (or top few) layers are tuned and the parameters in the remaining layers remain “frozen.” The top-most layer of the network can be modified to perform a classification, prediction, or generation task in the target domain (see Figure FIGREF15).
Fine-tuning aims to change the distribution of hidden representations in such a way that important information about the source domain is preserved, while idiosyncrasies of the target domain are captured in an effective manner BIBREF22. A similar process can be used to determine anomalies in documents. As an example, consider a model that is pre-trained on historical documents from a given sector. If fine-tuning the model on recent documents from the same sector dramatically shifts the representations for certain vectors, this can signal an evolving trend.
Anomaly detection via language modeling ::: Anomaly in weight tensors and other parameters
Models that have interpretable parameters can be used to identify areas of deviation or anomalous content. Attention mechanisms BIBREF23 allow the network to account for certain input signals more than others. The learned attention mechanism can provide insight into potential anomalies in the input. Consider a language model that predicts the social media engagement for a given tweet. Such a model can be used to distinguish between engaging and information-rich content versus clickbait, bot-generated, propagandistic, or promotional content by exposing how, for these categories, engagement is associated with attention to certain distributions of “trigger words.”
Table TABREF17 lists four scenarios for using the various layers and parameters of a language model in order to perform anomaly detection from text.
Challenges and Future Research
Like many other domains, in the financial domain, the application of language models as a measurement for semantic regularity of text bears the challenge of dealing with unseen input. Unseen input can be mistaken for anomaly, especially in systems that are designed for error detection. As an example, a system that is trained to correct errors in an earnings call transcript might treat named entities such as the names of a company's executives, or a recent acquisition, as anomalies. This problem is particularly prominent in fine-tuned language models, which are pre-trained on generic corpora that might not include domain-specific terms.
When anomalies are of a malicious nature, such as in the case where abnormal clauses are included in credit agreements, the implementation of the anomalous content is adapted to appear normal. Thereby, the task of detecting normal language becomes more difficult.
Alternatively, in the case of language used by executives in company presentations such as earnings calls, there may be a lot of noise in the data due to the large degree of variability in the personalities and linguistic patterns of various leaders. This natural variability can resemble genuine anomalies, making it difficult to identify the true ones.
Factors related to market interactions and competitive behavior can also impact the effectiveness of anomaly-detection models. In detecting the emergence of a new industry sector, it may be challenging for a system to detect novelty when a collection of companies, rather than a single company, behave in an anomalous way. The former may be the more common real-world scenario as companies closely monitor and mimic the innovations of their competitors. The exact notion of anomaly can also vary based on the sector and point in time. For example, in the technology sector, the norm in today's world is one of continuous innovation and technological advancements.
Additionally, certain types of anomaly can interact and make it difficult for systems to distinguish between them. As an example, a system that is trained to identify the operating segments of a company tends to distinguish between information that is specific to the company, and information that is common across different companies. As a result, it might identify the names of the company's board of directors or its office locations as its operating segments.
Traditional machine learning models have previously tackled the above challenges, and solutions are likely to emerge in the neural paradigms as well. Any future research in these directions will have to account for the impact of such solutions on the reliability and explainability of the resulting models and their robustness against adversarial data.
Conclusion
Anomaly detection from text can have numerous applications in finance, including risk detection, predictive analysis, error correction, and peer detection. We have outlined various perspectives on how anomaly can be interpreted in the context of finance, and corresponding views on how language modeling can be used to detect such aspects of anomalous content. We hope that this paper lays the groundwork for establishing a framework for understanding the opportunities and risks associated with these methods when applied in the financial domain. | No |
5f6fac08c97c85d5f4f4d56d8b0691292696f8e6 | 5f6fac08c97c85d5f4f4d56d8b0691292696f8e6_0 | Q: Does the paper report a baseline for the task?
Text: Introduction
The detection of anomalous trends in the financial domain has focused largely on fraud detection BIBREF0, risk modeling BIBREF1, and predictive analysis BIBREF2. The data used in the majority of such studies is of time-series, transactional, graph or generally quantitative or structured nature. This belies the critical importance of semi-structured or unstructured text corpora that practitioners in the finance domain derive insights from—corpora such as financial reports, press releases, earnings call transcripts, credit agreements, news articles, customer interaction logs, and social data.
Previous research in anomaly detection from text has evolved largely independently from financial applications. Unsupervised clustering methods have been applied to documents in order to identify outliers and emerging topics BIBREF3. Deviation analysis has been applied to text in order to identify errors in spelling BIBREF4 and tagging of documents BIBREF5. Recent popularity of distributional semantics BIBREF6 has led to further advances in semantic deviation analysis BIBREF7. However, current research remains largely divorced from specific applications within the domain of finance.
In the following sections, we enumerate major applications of anomaly detection from text in the financial domain, and contextualize them within current research topics in Natural Language Processing.
Five views on anomaly
Anomaly detection is a strategy that is often employed in contexts where a deviation from a certain norm is sought to be captured, especially when extreme class imbalance impedes the use of a supervised approach. The implementation of such methods allows for the unveiling of previously hidden or obstructed insights.
In this section, we lay out five perspectives on how textual anomaly detection can be applied in the context of finance, and how each application opens up opportunities for NLP researchers to apply current research to the financial domain.
Five views on anomaly ::: Anomaly as error
Previous studies have used anomaly detection to identify and correct errors in text BIBREF4, BIBREF5. These are often unintentional errors that occur as a result of some form of data transfer, e.g. from audio to text, from image to text, or from one language to another. Such studies have direct applicability to the error-prone process of earnings call or customer call transcription, where audio quality, accents, and domain-specific terms can lead to errors. Consider a scenario where the CEO of a company states in an audio conference, `Now investments will be made in Asia.' However, the system instead transcribes, `No investments will be made in Asia.' There is a meaningful difference in the implication of the two statements that could greatly influence the analysis and future direction of the company. Additionally, with regards to the second scenario, it is highly unlikely that the CEO would make such a strong and negative statement in a public setting thus supporting the use of anomaly detection for error correction.
Optical-character-recognition from images is another error-prone process with large applicability to finance. Many financial reports and presentations are circulated as image documents that need to undergo OCR in order to be machine-readable. OCR might also be applicable to satellite imagery and other forms of image data that might include important textual content such as a graphical representation of financial data. Errors that result from OCR'd documents can often be fixed using systems that have a robust semantic representation of the target domain. For instance, a model that is trained on financial reports might have encoded awareness that emojis are unlikely to appear in them or that it is unusual for the numeric value of profit to be higher than that of revenue.
Five views on anomaly ::: Anomaly as irregularity
Anomaly in the semantic space might reflect irregularities that are intentional or emergent, signaling risky behavior or phenomena. A sudden change in the tone and vocabulary of a company's leadership in their earnings calls or financial reports can signal risk. News stories that have abnormal language, or irregular origination or propagation patterns might be unreliable or untrustworthy.
BIBREF8 showed that when trained on similar domains or contexts, distributed representations of words are likely to be stable, where stability is measured as the similarity of their nearest neighbors in the distributed space. Such insight can be used to assess anomalies in this sense. As an example, BIBREF9 identified cliques of users on Twitter who consistently shared news from similar domains. Characterizing these networks as “echo-chambers,” they then represented the content shared by these echo-chambers as distributed representations. When certain topics from one echo-chamber began to deviate from similar topics in other echo-chambers, the content was tagged as unreliable. BIBREF9 showed that this method can be used to improve the performance of standard methods for fake-news detection.
In another study BIBREF10, the researchers hypothesized that transparent language in earnings calls indicates high expectations for performance in the upcoming quarters, whereas semantic ambiguity can signal a lack of confidence and expected poor performance. By quantifying transparency as the frequent use of numbers, shorter words, and unsophisticated vocabulary, they showed that a change in transparency is associated with a change in future performance.
Five views on anomaly ::: Anomaly as novelty
Anomaly can indicate a novel event or phenomenon that may or may not be risky. Breaking news stories often emerge as anomalous trends on social media. BIBREF11 experimented with this in their effort to detect novel events from Twitter conversations. By representing each event as a real-time cluster of tweets (where each tweet was encoded as a vector), they managed to assess the novelty of the event by comparing its centroid to the centroids of older events.
Novelty detection can also be used to detect emerging trends on social media, e.g. controversies that engulf various brands often start as small local events that are shared on social media and attract attention over a short period of time. How people respond to these events in early stages of development can be a measure of their veracity or controversiality BIBREF12, BIBREF13.
An anomaly in an industry grouping of companies can also be indicative of a company that is disrupting the norm for that industry and the emergence of a new sector or sub-sector. Often known as trail-blazers, these companies innovate faster than their competitors to meet market demands sometimes even before the consumer is aware of their need. As these companies continually evolve their business lines, their core operations are novel outliers from others in the same industry classification that can serve as meaningful signals of transforming industry demands.
Five views on anomaly ::: Anomaly as semantic richness
A large portion of text documents that analysts and researchers in the financial sectors consume have a regulatory nature. Annual financial reports, credit agreements, and filings with the U.S. Securities and Exchange Commission (SEC) are some of these types of documents. These documents can be tens or hundreds of pages long, and often include boilerplate language that the readers might need to skip or ignore in order to get to the “meat” of the content. Often, the abnormal clauses found in these documents are buried in standard text so as not to attract attention to the unique phrases.
BIBREF14 used smoothed representations of n-grams in SEC filings in order to identify boilerplate and abnormal language. They did so by comparing the probability of each n-gram against the company's previous filings, against other filings in the same sector, and against other filings from companies with similar market cap. The aim was to assist accounting analysts in skipping boilerplate language and focusing their attention on important snippets in these documents.
Similar methods can be applied to credit agreements where covenants and clauses that are too common are often ignored by risk analysts and special attention is paid to clauses that “stand out” from similar agreements.
Five views on anomaly ::: Anomaly as contextual relevance
Certain types of documents include universal as well as context-specific signals. As an example, consider a given company's financial reports. The reports may include standard financial metrics such as total revenue, net sales, net income, etc. In addition to these universal metrics, businesses often report their performance in terms of the performance of their operating segments. These segments can be business divisions, products, services, or regional operations. The segments are often specific to the company or its peers. For example, Apple Inc.'s segments might include “iPhone,” “iMac,” “iPad,” and “services.” The same segments will not appear in reports by other businesses.
For many analysts and researchers, operating segments are a crucial part of exploratory or predictive analysis. They use performance metrics associated with these segments to compare the business to its competitors, to estimate its market share, and to project the overall performance of the business in upcoming quarters. Automating the identification and normalization of these metrics can facilitate more insightful analytical research. Since these segments are often specific to each business, supervised models that are trained on a diverse set of companies cannot capture them without overfitting to certain companies. Instead, these segments can be treated as company-specific anomalies.
Anomaly detection via language modeling
Unlike numeric data, text data is not directly machine-readable, and requires some form of transformation as a pre-processing step. In “bag-of-words” methods, this transformation can take place by assigning an index number to each word, and representing any block of text as an unordered set of these words. A slightly more sophisticated approach might chain words into continuous “n-grams” and represent a block of text as an ordered series of “n-grams” that have been extracted on a sliding window of size n. These approaches are conventionally known as “language modeling.”
Since the advent of high-powered processors enabled the widespread use of distributed representations, language modeling has rapidly evolved and adapted to these new capabilities. Recurrent neural networks can capture an arbitrarily long sequence of text and perform various tasks such as classification or text generation BIBREF16. In this new context, language modeling often refers to training a recurrent network that predicts a word in a given sequence of text BIBREF17. Language models are easy to train because even though they follow a predictive mechanism, they do not need any labeled data, and are thus unsupervised.
Figure FIGREF6 is a simple illustration of how a neural network composed of recurrent units such as Long Short-Term Memory (LSTM) BIBREF18 can perform language modeling. There are four main components of the network:
The input vectors ($x_i$), which represent units (i.e. characters, words, phrases, sentences, paragraphs, etc.) in the input text. Occasionally, these are represented by one-hot vectors that assign a unique index to each particular input. More commonly, these vectors are adapted from a pre-trained corpus, where distributed representations have been inferred either by a simpler auto-encoding process BIBREF19 or by applying the same recurrent model to a baseline corpus such as Wikipedia BIBREF17.
The output vectors ($y_i$), which represent the model's prediction of the next word in the sequence. Naturally, they are represented in the same dimensionality as $x_i$s.
The hidden vectors ($h_i$), which are often randomly initialized and learned through backpropagation. Often trained as dense representations, these vectors tend to display characteristics that indicate semantic richness BIBREF20 and compositionality BIBREF19. While the language model can be used as a text-generation mechanism, the hidden vectors are a strong side product that are sometimes extracted and reused as augmented features in other machine learning systems BIBREF21.
The weights of the network ($W_{ij}$) and other parameters, which are tuned through backpropagation. These often indicate how each vector in the input or hidden sequence is utilized to generate the output. These parameters play a big role in the way the outputs of neural networks are reverse-engineered or explained to the end user.
The distributions of any of the above-mentioned components can be studied to mine signals for anomalous behavior in the context of irregularity, error, novelty, semantic richness, or contextual relevance.
Anomaly detection via language modeling ::: Anomaly in input vectors
As previously mentioned, the input vectors to a text-based neural network are often adapted from publicly-available word vector corpora. In simpler architectures, the network is allowed to back-propagate its errors all the way to the input layer, which might cause the input vectors to be modified. This can serve as a signal for anomaly in the semantic distributions between the original vectors and the modified vectors.
Analyzing the stability of word vectors when trained on different iterations can also signal anomalous trends BIBREF8.
Anomaly detection via language modeling ::: Anomaly in output vectors
As previously mentioned, language models generate a probability distribution over a word (or character) in a sequence. These probabilities can be used to detect transcription or character-recognition errors in a domain-friendly manner. When the language model is trained on financial data, domain-specific trends (such as the use of commas and parentheses in financial metrics) can be captured and accounted for by the network, minimizing the rate of false positives.
Anomaly detection via language modeling ::: Anomaly in hidden vectors
A recent advancement in text processing is the introduction of fine-tuning methods to neural networks trained on text BIBREF17. Fine-tuning is an approach that facilitates the transfer of semantic knowledge from one domain (source) to another domain (target). The source domain is often large and generic, such as web data or the Wikipedia corpus, while the target domain is often specific (e.g. SEC filings). A network is pre-trained on the source corpus such that its hidden representations are enriched. Next, the pre-trained network is re-trained on the target domain, but this time only the final (or top few) layers are tuned and the parameters in the remaining layers remain “frozen.” The top-most layer of the network can be modified to perform a classification, prediction, or generation task in the target domain (see Figure FIGREF15).
Fine-tuning aims to change the distribution of hidden representations in such a way that important information about the source domain is preserved, while idiosyncrasies of the target domain are captured in an effective manner BIBREF22. A similar process can be used to determine anomalies in documents. As an example, consider a model that is pre-trained on historical documents from a given sector. If fine-tuning the model on recent documents from the same sector dramatically shifts the representations for certain vectors, this can signal an evolving trend.
Anomaly detection via language modeling ::: Anomaly in weight tensors and other parameters
Models that have interpretable parameters can be used to identify areas of deviation or anomalous content. Attention mechanisms BIBREF23 allow the network to account for certain input signals more than others. The learned attention mechanism can provide insight into potential anomalies in the input. Consider a language model that predicts the social media engagement for a given tweet. Such a model can be used to distinguish between engaging and information-rich content versus clickbait, bot-generated, propagandistic, or promotional content by exposing how, for these categories, engagement is associated with attention to certain distributions of “trigger words.”
Table TABREF17 lists four scenarios for using the various layers and parameters of a language model in order to perform anomaly detection from text.
Challenges and Future Research
Like many other domains, in the financial domain, the application of language models as a measurement for semantic regularity of text bears the challenge of dealing with unseen input. Unseen input can be mistaken for anomaly, especially in systems that are designed for error detection. As an example, a system that is trained to correct errors in an earnings call transcript might treat named entities such as the names of a company's executives, or a recent acquisition, as anomalies. This problem is particularly prominent in fine-tuned language models, which are pre-trained on generic corpora that might not include domain-specific terms.
When anomalies are of a malicious nature, such as in the case where abnormal clauses are included in credit agreements, the implementation of the anomalous content is adapted to appear normal. Thereby, the task of detecting normal language becomes more difficult.
Alternatively, in the case of language used by executives in company presentations such as earnings calls, there may be a lot of noise in the data due to the large degree of variability in the personalities and linguistic patterns of various leaders. This natural variability can resemble genuine anomalies, making it difficult to identify the true ones.
Factors related to market interactions and competitive behavior can also impact the effectiveness of anomaly-detection models. In detecting the emergence of a new industry sector, it may be challenging for a system to detect novelty when a collection of companies, rather than a single company, behave in an anomalous way. The former may be the more common real-world scenario as companies closely monitor and mimic the innovations of their competitors. The exact notion of anomaly can also vary based on the sector and point in time. For example, in the technology sector, the norm in today's world is one of continuous innovation and technological advancements.
Additionally, certain types of anomaly can interact and make it difficult for systems to distinguish between them. As an example, a system that is trained to identify the operating segments of a company tends to distinguish between information that is specific to the company, and information that is common across different companies. As a result, it might identify the names of the company's board of directors or its office locations as its operating segments.
Traditional machine learning models have previously tackled the above challenges, and solutions are likely to emerge in the neural paradigms as well. Any future research in these directions will have to account for the impact of such solutions on the reliability and explainability of the resulting models and their robustness against adversarial data.
Conclusion
Anomaly detection from text can have numerous applications in finance, including risk detection, predictive analysis, error correction, and peer detection. We have outlined various perspectives on how anomaly can be interpreted in the context of finance, and corresponding views on how language modeling can be used to detect such aspects of anomalous content. We hope that this paper lays the groundwork for establishing a framework for understanding the opportunities and risks associated with these methods when applied in the financial domain. | No |
6adec34d86095643e6b89cda5c7cd94f64381acc | 6adec34d86095643e6b89cda5c7cd94f64381acc_0 | Q: What non-contextual properties do they refer to?
Text: Introduction
Explanations are essential for understanding and learning BIBREF0. They can take many forms, ranging from everyday explanations for questions such as why one likes Star Wars, to sophisticated formalization in the philosophy of science BIBREF1, to simply highlighting features in recent work on interpretable machine learning BIBREF2.
Although everyday explanations are mostly encoded in natural language, natural language explanations remain understudied in NLP, partly due to a lack of appropriate datasets and problem formulations. To address these challenges, we leverage /r/ChangeMyView, a community dedicated to sharing counterarguments to controversial views on Reddit, to build a sizable dataset of naturally-occurring explanations. Specifically, in /r/ChangeMyView, an original poster (OP) first delineates the rationales for a (controversial) opinion (e.g., in Table TABREF1, “most hit music artists today are bad musicians”). Members of /r/ChangeMyView are invited to provide counterarguments. If a counterargument changes the OP's view, the OP awards a $\Delta $ to indicate the change and is required to explain why the counterargument is persuasive. In this work, we refer to what is being explained, including both the original post and the persuasive comment, as the explanandum.
An important advantage of explanations in /r/ChangeMyView is that the explanandum contains most of the required information to provide its explanation. These explanations often select key counterarguments in the persuasive comment and connect them with the original post. As shown in Table TABREF1, the explanation naturally points to, or echoes, part of the explanandum (including both the persuasive comment and the original post) and in this case highlights the argument of “music serving different purposes.”
These naturally-occurring explanations thus enable us to computationally investigate the selective nature of explanations: “people rarely, if ever, expect an explanation that consists of an actual and complete cause of an event. Humans are adept at selecting one or two causes from a sometimes infinite number of causes to be the explanation” BIBREF3. To understand the selective process of providing explanations, we formulate a word-level task to predict whether a word in an explanandum will be echoed in its explanation.
Inspired by the observation that words that are likely to be echoed are either frequent or rare, we propose a variety of features to capture how a word is used in the explanandum as well as its non-contextual properties in Section SECREF4. We find that a word's usage in the original post and in the persuasive argument are similarly related to being echoed, except in part-of-speech tags and grammatical relations. For instance, verbs in the original post are less likely to be echoed, while the relationship is reversed in the persuasive argument.
We further demonstrate that these features can significantly outperform a random baseline and even a neural model with substantially more knowledge of a word's context. Predicting whether content words (i.e., non-stopwords) are echoed is much harder than for stopwords; among content words, adjectives are the most difficult and nouns are relatively the easiest. This observation highlights the important role of nouns in explanations. We also find that the relationship between a word's usage in the original post and in the persuasive comment is crucial for predicting the echoing of content words. Our proposed features can also improve the performance of pointer generator networks with coverage in generating explanations BIBREF4.
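To make the word-level task concrete, the sketch below derives binary labels (is an explanandum word echoed in the explanation?) and a handful of simple features; the particular features and any downstream classifier are illustrative assumptions, not the authors' exact feature set or model.

```python
def word_features(word, op_tokens, pc_tokens, stopwords):
    """Illustrative features: usage in the original post (OP), usage in the
    persuasive comment (PC), and two toy non-contextual properties."""
    return [
        op_tokens.count(word),        # frequency in the OP
        pc_tokens.count(word),        # frequency in the PC
        len(word),                    # word length (hypothetical property)
        float(word in stopwords),     # stopword indicator (hypothetical property)
    ]

def build_examples(op, pc, explanation,
                   stopwords=frozenset({"the", "a", "of", "to", "and"})):
    """Label every word of the explanandum (OP + PC) by whether it is echoed
    in the explanation; returns words, feature rows X, and binary labels y."""
    op_t, pc_t = op.lower().split(), pc.lower().split()
    echoed = set(explanation.lower().split())
    words = sorted(set(op_t) | set(pc_t))
    X = [word_features(w, op_t, pc_t, stopwords) for w in words]
    y = [int(w in echoed) for w in words]
    return words, X, y

# Rows collected from many (OP, PC, explanation) tuples could then be fed
# to any off-the-shelf classifier to predict which words will be echoed.
```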
To summarize, our main contributions are:
We highlight the importance of computationally characterizing human explanations and formulate a concrete problem of predicting how information is selected from explananda to form explanations, including building a novel dataset of naturally-occurring explanations.
We provide a computational characterization of natural language explanations and demonstrate the U-shape in which words get echoed.
We identify interesting patterns in what gets echoed through a novel word-level classification task, including the importance of nouns in shaping explanations and the importance of contextual properties of both the original post and persuasive comment in predicting the echoing of content words.
We show that vanilla LSTMs fail to learn some of the features we develop and that the proposed features can even improve performance in generating explanations with pointer networks.
Our code and dataset are available at https://chenhaot.com/papers/explanation-pointers.html.
Related Work
To provide background for our study, we first present a brief overview of explanations for the NLP community, and then discuss the connection of our study with pointer networks, linguistic accommodation, and argumentation mining.
The most developed discussion of explanations is in the philosophy of science. Extensive studies aim to develop formal models of explanations (e.g., the deductive-nomological model in BIBREF5, see BIBREF1 and BIBREF6 for a review). In this view, explanations are like proofs in logic. On the other hand, psychology and cognitive sciences examine “everyday explanations” BIBREF0, BIBREF7. These explanations tend to be selective, are typically encoded in natural language, and shape our understanding and learning in life despite the absence of “axioms.” Please refer to BIBREF8 for a detailed comparison of these two modes of explanation.
Although explanations have attracted significant interest from the AI community thanks to the growing interest on interpretable machine learning BIBREF9, BIBREF10, BIBREF11, such studies seldom refer to prior work in social sciences BIBREF3. Recent studies also show that explanations such as highlighting important features induce limited improvement on human performance in detecting deceptive reviews and media biases BIBREF12, BIBREF13. Therefore, we believe that developing a computational understanding of everyday explanations is crucial for explainable AI. Here we provide a data-driven study of everyday explanations in the context of persuasion.
In particular, we investigate the “pointers” in explanations, inspired by recent work on pointer networks BIBREF14. Copying mechanisms allow a decoder to generate a token by copying from the source, and have been shown to be effective in generation tasks ranging from summarization to program synthesis BIBREF4, BIBREF15, BIBREF16. To the best of our knowledge, our work is the first to investigate the phenomenon of pointers in explanations.
Linguistic accommodation and studies on quotations also examine the phenomenon of reusing words BIBREF17, BIBREF18, BIBREF19, BIBREF20. For instance, BIBREF21 show that power differences are reflected in the echoing of function words; BIBREF22 find that news media prefer to quote locally distinct sentences in political debates. In comparison, our word-level formulation presents a fine-grained view of echoing words, and puts a stronger emphasis on content words than work on linguistic accommodation.
Finally, our work is concerned with an especially challenging problem in social interaction: persuasion. A battery of studies has sought to enhance our understanding of persuasive arguments BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, and the area of argumentation mining specifically investigates the structure of arguments BIBREF28, BIBREF29, BIBREF30. We build on previous work by BIBREF31 and leverage the dynamics of /r/ChangeMyView. Although our findings are certainly related to the persuasion process, we focus on understanding the self-described reasons for persuasion, instead of the structure of arguments or the factors that drive effective persuasion.
Dataset
Our dataset is derived from the /r/ChangeMyView subreddit, which has more than 720K subscribers BIBREF31. /r/ChangeMyView hosts conversations where someone expresses a view and others then try to change that person's mind. Despite being fundamentally based on argument, /r/ChangeMyView has a reputation for being remarkably civil and productive BIBREF32, e.g., a journalist wrote “In a culture of brittle talking points that we guard with our lives, Change My View is a source of motion and surprise” BIBREF33.
The delta mechanism in /r/ChangeMyView allows members to acknowledge opinion changes and enables us to identify explanations for opinion changes BIBREF34. Specifically, it requires “Any user, whether they're the OP or not, should reply to a comment that changed their view with a delta symbol and an explanation of the change.” As a result, we have access to tens of thousands of naturally-occurring explanations and associated explananda. In this work, we focus on the opinion changes of the original posters.
Throughout this paper, we use the following terminology:
An original post (OP) is an initial post where the original poster justifies his or her opinion. We also use OP to refer to the original poster.
A persuasive comment (PC) is a comment that directly leads to an opinion change on the part of the OP (i.e., winning a $\Delta $).
A top-level comment is a comment that directly replies to an OP, and /r/ChangeMyView requires the top-level comment to “challenge at least one aspect of OP’s stated view (however minor), unless they are asking a clarifying question.”
An explanation is a comment where an OP acknowledges a change in his or her view and provides an explanation of the change. As shown in Table TABREF1, the explanation not only provides a rationale, it can also include other discourse acts, such as expressing gratitude.
Using https://pushshift.io, we collect the posts and comments in /r/ChangeMyView from January 17th, 2013 to January 31st, 2019, and extract tuples of (OP, PC, explanation). We use the tuples from the final six months of our dataset as the test set, those from the six months before that as the validation set, and the remaining tuples as the training set. The sets contain 5,270, 5,831, and 26,617 tuples respectively. Note that there is no overlap in time between the three sets and the test set can therefore be used to assess generalization including potential changes in community norms and world events.
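A minimal sketch of this chronological split, assuming the extracted tuples sit in a pandas DataFrame with a hypothetical created_utc column (seconds since the epoch); the two cutoff dates are inferred from the collection window described above.

```python
import pandas as pd

def time_based_split(tuples: pd.DataFrame,
                     ts_col: str = "created_utc",
                     val_cutoff: str = "2018-02-01",
                     test_cutoff: str = "2018-08-01"):
    """Chronological split: the final six months form the test set, the six
    months before that the validation set, and everything earlier the
    training set. Column name and cutoffs are illustrative assumptions."""
    ts = pd.to_datetime(tuples[ts_col], unit="s")
    train = tuples[ts < pd.Timestamp(val_cutoff)]
    valid = tuples[(ts >= pd.Timestamp(val_cutoff)) & (ts < pd.Timestamp(test_cutoff))]
    test = tuples[ts >= pd.Timestamp(test_cutoff)]
    return train, valid, test
```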
Preprocessing. We perform a number of preprocessing steps, such as converting blockquotes in Markdown to quotes, filtering explicit edits made by authors, mapping all URLs to a special @url@ token, and replacing hyperlinks with the link text. We ignore all triples that contain any deleted comments or posts. We use spaCy for tokenization and tagging BIBREF35. We also use the NLTK implementation of the Porter stemming algorithm to store the stemmed version of each word, for later use in our prediction task BIBREF36, BIBREF37. Refer to the supplementary material for more information on preprocessing.
Data statistics. Table TABREF16 provides basic statistics of the training tuples and how they compare to other comments. We highlight the fact that PCs are on average longer than top-level comments, suggesting that PCs contain substantial counterarguments that directly contribute to opinion change. Therefore, we simplify the problem by focusing on the (OP, PC, explanation) tuples and ignore any other exchanges between an OP and a commenter.
Below, we highlight some notable features of explanations as they appear in our dataset.
The length of explanations correlates more strongly with the length of OPs and PCs than OP length does with PC length (Figure FIGREF8). This observation indicates that, in terms of language use, explanations are more closely tied to both OPs and PCs than PCs are to OPs. A possible reason is that the explainer combines their natural tendency towards a certain length with accommodation of the PC.
Explanations have a greater fraction of “pointers” than do persuasive comments (Figure FIGREF8). We measure the likelihood that a word in an explanation is copied from either its OP or PC, and the analogous likelihood that a word in a PC is copied from its OP. As we discussed in Section SECREF1, the words in an explanation are much more likely to come from the existing discussion than are the words in a PC (59.8% vs 39.0%). This phenomenon holds even if we restrict ourselves to considering words outside quotations, which removes the effect of quoting other parts of the discussion, and if we focus only on content words, which removes the effect of “reusing” stopwords.
Relation between a word being echoed and its document frequency (Figure FIGREF8). Finally, as a preview of our main results, the document frequency of a word from the explanandum is related to the probability of being echoed in the explanation. Although the average likelihood declines as the document frequency gets lower, we observe an intriguing U-shape in the scatter plot. In other words, the words that are most likely to be echoed are either unusually frequent or unusually rare, while most words in the middle show a moderate likelihood of being echoed.
Understanding the Pointers in Explanations
To further investigate how explanations select words from the explanandum, we formulate a word-level prediction task to predict whether words in an OP or PC are echoed in its explanation. Formally, given a tuple of (OP, PC, explanation), we extract the unique stemmed words as $\mathcal {V}_{\text{OP}}, \mathcal {V}_{\text{PC}}, \mathcal {V}_{\text{EXP}}$. We then define the label for each word in the OP or PC, $w \in \mathcal {V}_{\text{OP}} \cup \mathcal {V}_{\text{PC}}$, based on the explanation as follows: $y_w = 1$ if $w \in \mathcal {V}_{\text{EXP}}$, and $y_w = 0$ otherwise.
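A minimal sketch of this labeling step, assuming each text has already been tokenized and stemmed into a list of strings; all function and variable names here are ours.

```python
from typing import Dict, List, Set

def echo_labels(op_stems: List[str],
                pc_stems: List[str],
                exp_stems: List[str]) -> Dict[str, int]:
    """Label each unique stem of the explanandum (OP or PC) with 1 if it is
    echoed in the explanation and 0 otherwise."""
    v_op: Set[str] = set(op_stems)
    v_pc: Set[str] = set(pc_stems)
    v_exp: Set[str] = set(exp_stems)
    return {w: int(w in v_exp) for w in v_op | v_pc}

# Toy usage: only "music" is echoed by the explanation.
labels = echo_labels(["music", "artist", "bad"], ["music", "purpose"], ["music", "thank"])
assert labels == {"music": 1, "artist": 0, "bad": 0, "purpose": 0}
```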
Our prediction task is thus a straightforward binary classification task at the word level. We develop the following five groups of features to capture properties of how a word is used in the explanandum (see Table TABREF18 for the full list):
Non-contextual properties of a word. These features are derived directly from the word and capture the general tendency of a word being echoed in explanations.
Word usage in an OP or PC (two groups). These features capture how a word is used in an OP or PC. As a result, for each feature, we have two values for the OP and PC respectively.
How a word connects an OP and PC. These features look at the difference between word usage in the OP and PC. We expect this group to be the most important in our task.
General OP/PC properties. These features capture the general properties of a conversation. They can be used to characterize the background distribution of echoing.
Table TABREF18 further shows the intuition for including each feature, and condensed $t$-test results after Bonferroni correction. Specifically, we test whether the words that were echoed in explanations have different feature values from those that were not echoed. In addition to considering all words, we also separately consider stopwords and content words in light of Figure FIGREF8. Here, we highlight a few observations:
Although we expect more complicated words (#characters) to be echoed more often, this is not the case on average. We also observe an interesting example of Simpson's paradox in the results for Wordnet depth BIBREF38: shallower words are more likely to be echoed when all words are considered together, but deeper words are more likely to be echoed when content words and stopwords are examined separately.
OPs and PCs generally exhibit similar behavior for most features, except for part-of-speech tags and grammatical relations (subject, object, and other). For instance, verbs in an OP are less likely to be echoed, while verbs in a PC are more likely to be echoed.
Although nouns from both OPs and PCs are less likely to be echoed, within content words, subjects and objects from an OP are more likely to be echoed. Surprisingly, subjects and objects in a PC are less likely to be echoed, which suggests that the original poster tends to refer back to their own subjects and objects, or introduce new ones, when providing explanations.
Later words in OPs and PCs are more likely to be echoed, especially in OPs. This could relate to OPs summarizing their rationales at the end of their post and PCs putting their strongest points last.
Although the number of surface forms in an OP or PC is positively correlated with being echoed, the differences in surface forms show reverse trends: the more surface forms of a word that show up only in the PC (i.e., not in the OP), the more likely a word is to be echoed. However, the reverse is true for the number of surface forms in only the OP. Such contrast echoes BIBREF31, in which dissimilarity in word usage between the OP and PC was a predictive feature of successful persuasion.
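As a concrete illustration of the significance testing summarized in Table TABREF18, the sketch below compares the feature values of echoed and non-echoed words with Welch's t-test and applies a Bonferroni correction; the matrix layout and the choice of the unequal-variance variant are our assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind

def bonferroni_ttests(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """X: (n_words, n_features) feature matrix; y: binary echo labels.
    Returns one Bonferroni-corrected p-value per feature."""
    n_tests = X.shape[1]
    pvals = []
    for j in range(n_tests):
        echoed, not_echoed = X[y == 1, j], X[y == 0, j]
        _, p = ttest_ind(echoed, not_echoed, equal_var=False)
        pvals.append(min(p * n_tests, 1.0))  # Bonferroni correction
    return np.array(pvals)
```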
Predicting Pointers
We further examine the effectiveness of our proposed features in a predictive setting. These features achieve strong performance in the word-level classification task, and can enhance neural models in both the word-level task and generating explanations. However, the word-level task remains challenging, especially for content words.
Predicting Pointers ::: Experiment setup
We consider two classifiers for our word-level classification task: logistic regression and gradient boosting tree (XGBoost) BIBREF39. We hypothesized that XGBoost would outperform logistic regression because our problem is non-linear, as shown in Figure FIGREF8.
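A minimal sketch of the two feature-based classifiers, assuming X_train and y_train hold the scaled word-level features and binary echo labels; the hyperparameter values below are placeholders drawn from the searched grids in the supplementary material, not the finally selected settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

# Stand-in data with the same shape conventions as our 66-dimensional features.
rng = np.random.RandomState(0)
X_train, y_train = rng.rand(1000, 66), rng.binomial(1, 0.15, 1000)

logreg = LogisticRegression(solver="lbfgs", C=1.0,
                            class_weight={0: 0.2, 1: 0.8}, max_iter=1000)
xgb = XGBClassifier(n_estimators=1000, learning_rate=0.1, max_depth=7,
                    min_child_weight=5, scale_pos_weight=4)

logreg.fit(X_train, y_train)
xgb.fit(X_train, y_train)
```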
To examine the utility of our features in a neural framework, we further adapt our word-level task as a tagging task, and use an LSTM as a baseline. Specifically, we concatenate an OP and PC with a special token as the separator so that an LSTM model can potentially distinguish the OP from the PC, and then tag each word based on the label of its stemmed version. We use GloVe embeddings to initialize the word embeddings BIBREF40. We concatenate our proposed features of the corresponding stemmed word to the word embedding; the resulting difference in performance relative to a vanilla LSTM demonstrates the utility of our proposed features. We scale all features to $[0, 1]$ before fitting the models. As introduced in Section SECREF3, we split our tuples of (OP, PC, explanation) into training, validation, and test sets, and use the validation set for hyperparameter tuning. Refer to the supplementary material for additional experimental details.
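A compact PyTorch sketch of this tagging setup, showing how the hand-crafted features of each word are concatenated to its embedding before the LSTM; separator handling, padding, and GloVe initialization are simplified, and all names are illustrative.

```python
import torch
import torch.nn as nn

class EchoTagger(nn.Module):
    """Tags each token of "[OP tokens] <sep> [PC tokens]" as echoed or not."""
    def __init__(self, vocab_size: int, emb_dim: int = 300,
                 feat_dim: int = 66, hidden: int = 300):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)   # load GloVe here in practice
        self.lstm = nn.LSTM(emb_dim + feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)                # echoed vs. not echoed

    def forward(self, token_ids, word_feats):
        # token_ids: (batch, seq_len); word_feats: (batch, seq_len, feat_dim)
        x = torch.cat([self.emb(token_ids), word_feats], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)                             # (batch, seq_len, 2)

# Toy forward pass.
model = EchoTagger(vocab_size=5000)
logits = model(torch.randint(0, 5000, (2, 10)), torch.rand(2, 10, 66))
```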
Evaluation metric. Since our problem is imbalanced, we use the F1 score as our evaluation metric. For the tagging approach, we average the labels of words with the same stemmed version to obtain a single prediction for the stemmed word. To establish a baseline, we consider a random method that predicts the positive label with 0.15 probability (the base rate of positive instances).
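A sketch of how the random baseline can be scored, assuming a vector of gold word-level labels; because the sampling probability here matches the base rate, the expected F1 is roughly the base rate itself.

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.RandomState(0)
y_true = rng.binomial(1, 0.15, size=10000)           # stand-in gold labels
y_rand = rng.binomial(1, 0.15, size=y_true.shape)    # random-baseline predictions
print(f"random-baseline F1: {f1_score(y_true, y_rand):.3f}")
```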
Predicting Pointers ::: Prediction Performance
Overall performance (Figure FIGREF28). Although our word-level task is heavily imbalanced, all of our models outperform the random baseline by a wide margin. As expected, content words are much more difficult to predict than stopwords, but the best F1 score in content words more than doubles that of the random baseline (0.286 vs. 0.116). Notably, although we strongly improve on our random baseline, even our best F1 scores are relatively low, and this holds true regardless of the model used. Despite involving more tokens than standard tagging tasks (e.g., BIBREF41 and BIBREF42), predicting whether a word is going to be echoed in explanations remains a challenging problem.
Although the vanilla LSTM model incorporates additional knowledge (in the form of word embeddings), the feature-based XGBoost and logistic regression models both outperform the vanilla LSTM model. Concatenating our proposed features with word embeddings leads to improved performance from the LSTM model, which becomes comparable to XGBoost. This suggests that our proposed features can be difficult to learn with an LSTM alone.
Despite the non-linearity observed in Figure FIGREF8, XGBoost only outperforms logistic regression by a small margin. In the rest of this section, we use XGBoost to further examine the effectiveness of different groups of features, and model performance in different conditions.
Ablation performance (Table TABREF34). First, when we consider only a single group of features, the OP-PC relation is, as we hypothesized, crucial: on its own it achieves almost as strong performance on content words as the full feature set. To further understand the strong performance of the OP-PC relation, Figure FIGREF28 shows the feature importance in the ablated model, measured by the normalized total gain (see the supplementary material for feature importance in the full model). A word's occurrence in both the OP and PC is clearly the most important feature, with the distance between its POS tag distributions the second most important. Recall that Table TABREF18 shows that words with similar POS behavior in the OP and PC are more likely to be echoed in the explanation.
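A sketch of how the normalized total gain used in Figure FIGREF28 can be read off a fitted XGBoost model; the helper name is ours, and features that are never used in a split are simply absent from the returned dictionary.

```python
from xgboost import XGBClassifier

def normalized_total_gain(model: XGBClassifier) -> dict:
    """Each feature's share of the total gain accumulated across all splits,
    sorted from most to least important."""
    gains = model.get_booster().get_score(importance_type="total_gain")
    total = sum(gains.values())
    return {feat: gain / total
            for feat, gain in sorted(gains.items(), key=lambda kv: kv[1], reverse=True)}
```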
Overall, it seems that word-level properties contribute the most valuable signals for predicting stopwords. If we restrict ourselves to only information in either an OP or PC, how a word is used in a PC is much more predictive of content word echoing (0.233 vs 0.191). This observation suggests that, for content words, the PC captures more valuable information than the OP. This finding is somewhat surprising given that the OP sets the topic of discussion and writes the explanation.
As for the effects of removing a group of features, we can see that there is little change in the performance on content words. This can be explained by the strong performance of the OP-PC relation on its own, and the possibility of the OP-PC relation being approximated by OP and PC usage. Again, word-level properties are valuable for strong performance in stopwords.
Performance vs. word source (Figure FIGREF28). We further break down the performance by where a word is from. We can group a word based on whether it shows up only in an OP, a PC, or both OP and PC, as shown in Table TABREF1. There is a striking difference between the performance in the three categories (e.g., for all words, 0.63 in OP & PC vs. 0.271 in PC only). The strong performance on words in both the OP and PC applies to stopwords and content words, even accounting for the shift in the random baseline, and recalls the importance of occurring both in OP and PC as a feature.
Furthermore, the echoing of words from the PC is harder to predict (0.271) than from the OP (0.347) despite the fact that words only in PCs are more likely to be echoed than words only in OPs (13.5% vs. 8.6%). The performance difference is driven by stopwords, suggesting that our overall model is better at capturing signals for stopwords used in OPs. This might relate to the fact that the OP and the explanation are written by the same author; prior studies have demonstrated the important role of stopwords for authorship attribution BIBREF43.
Nouns are the most reliably predicted part-of-speech tag within content words (Table TABREF35). Next, we break down the performance by part-of-speech tags. We focus on the part-of-speech tags that are semantically important, namely, nouns, proper nouns, verbs, adverbs, and adjectives.
Prediction performance can be seen as a proxy for how reliably a part-of-speech tag is reused when providing explanations. Consistent with our expectations for the importance of nouns and verbs, our models achieve the best performance on nouns within content words. Verbs are more challenging, but become the least difficult tag to predict when we consider all words, likely due to stopwords such as “have.” Adjectives turn out to be the most challenging category, suggesting that adjectival choice is perhaps more arbitrary than other parts of speech, and therefore less central to the process of constructing an explanation. The important role of nouns in shaping explanations resonates with the high recall rate of nouns in memory tasks BIBREF44.
Predicting Pointers ::: The Effect on Generating Explanations
One way to measure the ultimate success of understanding pointers in explanations is to be able to generate explanations. We use the pointer generator network with coverage as our starting point BIBREF4, BIBREF46 (see the supplementary material for details). We investigate whether concatenating our proposed features with word embeddings can improve generation performance, as measured by ROUGE scores.
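A minimal sketch of the ROUGE evaluation; the paper does not name its ROUGE implementation, so the rouge-score package and the toy strings below are our assumptions.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "music serves different purposes and hit artists optimize for catchiness"
generated = "hit artists optimize their music for catchiness"
for name, s in scorer.score(reference, generated).items():
    print(f"{name}: P={s.precision:.3f} R={s.recall:.3f} F={s.fmeasure:.3f}")
```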
Consistent with results in sequence tagging for word-level echoing prediction, our proposed features can enhance a neural model with copying mechanisms (see Table TABREF37). Specifically, their use leads to statistically significant improvement in ROUGE-1 and ROUGE-L, while slightly hurting the performance in ROUGE-2 (the difference is not statistically significant). We also find that our features can increase the likelihood of copying: an average of 17.59 unique words get copied to the generated explanation with our features, compared to 14.17 unique words without our features. For comparison, target explanations have an average of 34.81 unique words. We emphasize that generating explanations is a very challenging task (evidenced by the low ROUGE scores and examples in the supplementary material), and that fully solving the generation task requires more work.
Concluding Discussions
In this work, we conduct the first large-scale empirical study of everyday explanations in the context of persuasion. We assemble a novel dataset and formulate a word-level prediction task to understand the selective nature of explanations. Our results suggest that the relation between an OP and PC plays an important role in predicting the echoing of content words, while a word's non-contextual properties matter for stopwords. We show that vanilla LSTMs fail to learn some of the features we develop and that our proposed features can improve the performance in generating explanations using pointer networks. We also demonstrate the important role of nouns in shaping explanations.
Although our approach strongly outperforms random baselines, the relatively low F1 scores indicate that predicting which word is echoed in explanations is a very challenging task. It follows that we are only able to derive a limited understanding of how people choose to echo words in explanations. The extent to which explanation construction is fundamentally random BIBREF47, or whether there exist other unidentified patterns, is of course an open question. We hope that our study and the resources that we release encourage further work in understanding the pragmatics of explanations.
There are many promising research directions for future work in advancing the computational understanding of explanations. First, although /r/ChangeMyView has the useful property that its explanations are closely connected to its explananda, it is important to further investigate the extent to which our findings generalize beyond /r/ChangeMyView and Reddit and establish universal properties of explanations. Second, it is important to connect the words in explanations that we investigate here to the structure of explanations in psychology BIBREF7. Third, in addition to understanding what goes into an explanation, we need to understand what makes an explanation effective. A better understanding of explanations not only helps develop explainable AI, but also informs the process of collecting explanations that machine learning systems learn from BIBREF48, BIBREF49, BIBREF50.
Acknowledgments
We thank Kimberley Buchan, anonymous reviewers, and members of the NLP+CSS research group at CU Boulder for their insightful comments and discussions; Jason Baumgartner for sharing the dataset that enabled this research.
Supplemental Material ::: Preprocessing.
Before tokenizing, we pass each OP, PC, and explanation through a preprocessing pipeline, with the following steps:
Occasionally, /r/ChangeMyView's moderators will edit comments, prefixing their edits with “Hello, users of CMV” or “This is a footnote” (see Table TABREF46). We remove this, and any text that follows on the same line.
We replace URLs with a “@url@” token, defining a URL to be any string which matches the following regular expression: (https?://[^\s)]*).
We replace “$\Delta $” symbols and their analogues—such as “$\delta $”, “&#8710;”, and “!delta”—with the word “delta”. We also remove the word “delta” from explanations, if the explanation starts with delta.
Reddit–specific prefixes, such as “u/” (denoting a user) and “r/” (denoting a subreddit) are removed, as we observed that they often interfered with spaCy's ability to correctly parse its inputs.
We remove any text matching the regular expression EDIT(.*?):.* from the beginning of the match to the end of that line, as well as variations, such as Edit(.*?):.*.
Reddit allows users to insert blockquoted text. We extract any blockquotes and surround them with standard quotation marks.
We replace all contiguous whitespace with a single space. We also do this with tab characters and carriage returns, and with two or more hyphens, asterisks, or underscores.
Tokenizing the data. After passing text through our preprocessing pipeline, we use the default spaCy pipeline to extract part-of-speech tags, dependency tags, and entity details for each token BIBREF35. In addition, we use NLTK to stem words BIBREF36. This is used to compute all word level features discussed in Section 4 of the main paper.
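A condensed sketch of this cleanup-plus-tokenization pipeline, combining the URL and delta replacements above with spaCy tagging and NLTK Porter stemming; the spaCy model name and the exact ordering of steps are assumptions.

```python
import re
import spacy
from nltk.stem.porter import PorterStemmer

nlp = spacy.load("en_core_web_sm")   # assumed spaCy model
stemmer = PorterStemmer()

def clean(text: str) -> str:
    text = re.sub(r"https?://[^\s)]*", "@url@", text)               # URLs
    text = re.sub(r"\u0394|\u03b4|&#8710;|!delta", "delta", text)   # delta variants
    return re.sub(r"\s+", " ", text).strip()                        # collapse whitespace

def tokenize(text: str):
    """Return (surface form, POS tag, dependency label, stem) per token."""
    doc = nlp(clean(text))
    return [(tok.text, tok.pos_, tok.dep_, stemmer.stem(tok.text.lower()))
            for tok in doc]
```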
Supplemental Material ::: PC Echoing OP
Figure FIGREF49 shows a similar U-shape in the probability of a word being echoed in a PC. However, visually, rare words appear more likely to be echoed in explanations, whereas words of moderate frequency are more likely to be echoed in PCs. As PCs tend to be longer than explanations, we also normalized each word's echoing probability by that of the most frequent words so that the two settings are comparable. We indeed observed that, compared to PCs, explanations have a higher likelihood of echoing rare words but a lower likelihood of echoing words of moderate frequency.
Supplemental Material ::: Feature Calculation
Given an OP, PC, and explanation, we calculate a 66–dimensional vector for each unique stem in the concatenated OP and PC. Here, we describe the process of calculating each feature.
Inverse document frequency: for a stem $s$, the inverse document frequency is given by $\log \frac{N}{\mathrm {df}_s}$, where $N$ is the total number of documents (here, OPs and PCs) in the training set, and $\mathrm {df}_s$ is the number of documents in the training data whose set of stemmed words contains $s$.
Stem length: the number of characters in the stem.
Wordnet depth (min): starting with the stem, this is the length of the minimum hypernym path to the synset root.
Wordnet depth (max): similarly, this is the length of the maximum hypernym path.
Stem transfer probability: the percentage of times in which a stem seen in the explanandum is also seen in the explanation. If, during validation or testing, a stem is encountered for the first time, we set this to be the mean probability of transfer over all stems seen in the training data.
OP part–of–speech tags: a stem can represent multiple parts of speech. For example, both “traditions” and “traditional” will be stemmed to “tradit.” We count the percentage of times the given stem appears as each part–of–speech tag, following the Universal Dependencies scheme BIBREF53. If the stem does not appear in the OP, each part–of–speech feature will be $\frac{1}{16}$.
OP subject, object, and other: Given a stem $s$, we calculate the percentage of times that $s$'s surface forms in the OP are classified as subjects, objects, or something else by SpaCy. We follow the CLEAR guidelines BIBREF51, and use the following tags to indicate a subject: nsubj, nsubjpass, csubj, csubjpass, agent, and expl. Objects are identified using these tags: dobj, dative, attr, oprd. If $s$ does not appear at all in the OP, we let subject, object, and other each equal $\frac{1}{3}$.
OP term frequency: the number of times any surface form of a stem appears in the list of tokens that make up the OP.
OP normalized term frequency: the percentage of the OP's tokens which are a surface form of the given stem.
OP # of surface forms: the number of different surface forms for the given stem.
OP location: the average location of each surface form of the given stem which appears in the OP, where the location of a surface form is defined as the percentage of tokens which appear after that surface form. If the stem does not appear at all in the OP, this value is $\frac{1}{2}$.
OP is in quotes: the number of times the stem appears in the OP surrounded by quotation marks.
OP is entity: the percentage of tokens in the OP that are both a surface form for the given stem, and are tagged by SpaCy as one of the following entities: PERSON, NORP, FAC, ORG, GPE, LOC, PRODUCT, EVENT, WORK_OF_ART, LAW, and LANGUAGE.
PC equivalents of features 6-30.
In both OP and PC: 1, if one of the stem's surface forms appears in both the OP and PC. 0 otherwise.
# of unique surface forms in OP: for the given stem, the number of surface forms that appear in the OP, but not in the PC.
# of unique surface forms in PC: for the given stem, the number of surface forms that appear in the PC, but not in the OP.
Stem part–of–speech distribution difference: we consider the concatenation of features 6-21, along with the concatenation of features 31-46, as two distributions, and calculate the Jensen–Shannon divergence between them (see the sketch after this list).
Stem dependency distribution difference: similarly, we consider the concatenation of features 22-24 (OP dependency labels), and the concatenation of features 47-49 (PC dependency labels), as two distributions, and calculate the Jensen–Shannon divergence between them.
OP length: the number of tokens in the OP.
PC length: the number of tokens in the PC.
Length difference: the absolute value of the difference between OP length and PC length.
Avg. word length difference: the difference between the average number of characters per token in the OP and the average number of characters per token in the PC.
OP/PC part–of–speech tag distribution difference: the Jensen–Shannon divergence between the part–of–speech tag distributions of the OP on the one hand, and the PC on the other.
Depth of the PC in the thread: since there can be many back–and–forth replies before a user awards a delta, we number each comment in a thread, starting at 0 for the OP, and incrementing for each new comment before the PC appears.
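To make two of the less obvious computations concrete, the sketch below covers the inverse document frequency (feature 1) and the Jensen–Shannon divergence behind the distribution-difference features (48, 49, and 54); note that scipy exposes the Jensen–Shannon distance, which is squared here to recover the divergence.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def inverse_document_frequency(stem: str, doc_stem_sets: list) -> float:
    """Feature 1: log(N / df_s) over the training documents (OPs and PCs)."""
    n_docs = len(doc_stem_sets)
    df = sum(1 for stems in doc_stem_sets if stem in stems)
    return float(np.log(n_docs / df)) if df else 0.0

def distribution_difference(p, q) -> float:
    """Features 48/49/54: Jensen-Shannon divergence between two distributions
    (e.g., part-of-speech profiles in the OP vs. the PC)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return jensenshannon(p / p.sum(), q / q.sum(), base=2) ** 2
```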
Supplemental Material ::: Word–level Prediction Task
For each non–LSTM classifier, we train 11 models: one full model, and forward and backward models for each of the five feature groups. To train, we fit on the training set and use the validation set for hyperparameter tuning.
For the random model, since the echo rate of the training set is 15%, we simply predict 1 with 15% probability, and 0 otherwise.
For logistic regression, we use the lbfgs solver. To tune hyperparameters, we perform an exhaustive grid search, with $C$ taking values from $\lbrace 10^{x}:x\in \lbrace -1, 0, 1, 2, 3, 4\rbrace \rbrace $, and the respective weights of the negative and positive classes taking values from $\lbrace (x, 1-x): x\in \lbrace 0.25, 0.20, 0.15\rbrace \rbrace $.
We also train XGBoost models. Here, we use a learning rate of $0.1$, 1000 estimator trees, and no subsampling. We perform an exhaustive grid search to tune hyperparameters, with the max tree depth equaling 5, 7, or 9, the minimum weight of a child equaling 3, 5, or 7, and the weight of a positive class instance equaling 3, 4, or 5.
Finally, we train two LSTM models, each with a single 300–dimensional hidden layer. Due to efficiency considerations, we eschewed a full search of the parameter space, but experimented with different values of dropout, learning rate, positive class weight, and batch size. We ultimately trained each model for five epochs with a batch size of 32 and a learning rate of 0.001, using the Adam optimizer BIBREF52. We also weight positive instances four times more highly than negative instances.
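A sketch of the corresponding training setup: a class-weighted cross-entropy loss (positive instances weighted four times higher) with the Adam settings listed above; batching, padding, and the epoch loop are omitted, and the helper name is ours.

```python
import torch
import torch.nn as nn

def make_train_step(model: nn.Module, pos_weight: float = 4.0, lr: float = 1e-3):
    """Weight echoed (positive) instances 4x more than negative ones and
    optimize with Adam, as in the word-level tagging experiments."""
    criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, pos_weight]))
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    def step(logits: torch.Tensor, labels: torch.Tensor) -> float:
        # logits: (batch, seq_len, 2); labels: (batch, seq_len) of 0/1
        loss = criterion(logits.reshape(-1, 2), labels.reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    return step
```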
Supplemental Material ::: Generating Explanations
We formulate an abstractive summarization task using an OP concatenated with the PC as a source, and the explanation as target. We train two models, one with the features described above, and one without. A shared vocabulary of 50k words is constructed from the training set by setting the maximum encoding length to 500 words. We set the maximum decoding length to 100. We use a pointer generator network with coverage for generating explanations, using a bidirectional LSTM as an encoder and a unidirectional LSTM as a decoder. Both use a 256-dimensional hidden state. The parameters of this network are tuned using a validation set of five thousand instances. We constrain the batch size to 16 and train the network for 20k steps, using the parameters described in Table TABREF82.
Explanations are essential for understanding and learning BIBREF0. They can take many forms, ranging from everyday explanations for questions such as why one likes Star Wars, to sophisticated formalization in the philosophy of science BIBREF1, to simply highlighting features in recent work on interpretable machine learning BIBREF2.
Although everyday explanations are mostly encoded in natural language, natural language explanations remain understudied in NLP, partly due to a lack of appropriate datasets and problem formulations. To address these challenges, we leverage /r/ChangeMyView, a community dedicated to sharing counterarguments to controversial views on Reddit, to build a sizable dataset of naturally-occurring explanations. Specifically, in /r/ChangeMyView, an original poster (OP) first delineates the rationales for a (controversial) opinion (e.g., in Table TABREF1, “most hit music artists today are bad musicians”). Members of /r/ChangeMyView are invited to provide counterarguments. If a counterargument changes the OP's view, the OP awards a $\Delta $ to indicate the change and is required to explain why the counterargument is persuasive. In this work, we refer to what is being explained, including both the original post and the persuasive comment, as the explanandum.
An important advantage of explanations in /r/ChangeMyView is that the explanandum contains most of the required information to provide its explanation. These explanations often select key counterarguments in the persuasive comment and connect them with the original post. As shown in Table TABREF1, the explanation naturally points to, or echoes, part of the explanandum (including both the persuasive comment and the original post) and in this case highlights the argument of “music serving different purposes.”
These naturally-occurring explanations thus enable us to computationally investigate the selective nature of explanations: “people rarely, if ever, expect an explanation that consists of an actual and complete cause of an event. Humans are adept at selecting one or two causes from a sometimes infinite number of causes to be the explanation” BIBREF3. To understand the selective process of providing explanations, we formulate a word-level task to predict whether a word in an explanandum will be echoed in its explanation.
Inspired by the observation that words that are likely to be echoed are either frequent or rare, we propose a variety of features to capture how a word is used in the explanandum as well as its non-contextual properties in Section SECREF4. We find that a word's usage in the original post and in the persuasive argument are similarly related to being echoed, except in part-of-speech tags and grammatical relations. For instance, verbs in the original post are less likely to be echoed, while the relationship is reversed in the persuasive argument.
We further demonstrate that these features can significantly outperform a random baseline and even a neural model with significantly more knowledge of a word's context. The difficulty of predicting whether content words (i.e., non-stopwords) are echoed is much greater than that of stopwords, among which adjectives are the most difficult and nouns are relatively the easiest. This observation highlights the important role of nouns in explanations. We also find that the relationship between a word's usage in the original post and in the persuasive comment is crucial for predicting the echoing of content words. Our proposed features can also improve the performance of pointer generator networks with coverage in generating explanations BIBREF4.
To summarize, our main contributions are:
[itemsep=0pt,leftmargin=*,topsep=0pt]
We highlight the importance of computationally characterizing human explanations and formulate a concrete problem of predicting how information is selected from explananda to form explanations, including building a novel dataset of naturally-occurring explanations.
We provide a computational characterization of natural language explanations and demonstrate the U-shape in which words get echoed.
We identify interesting patterns in what gets echoed through a novel word-level classification task, including the importance of nouns in shaping explanations and the importance of contextual properties of both the original post and persuasive comment in predicting the echoing of content words.
We show that vanilla LSTMs fail to learn some of the features we develop and that the proposed features can even improve performance in generating explanations with pointer networks.
Our code and dataset is available at https://chenhaot.com/papers/explanation-pointers.html.
Related Work
To provide background for our study, we first present a brief overview of explanations for the NLP community, and then discuss the connection of our study with pointer networks, linguistic accommodation, and argumentation mining.
The most developed discussion of explanations is in the philosophy of science. Extensive studies aim to develop formal models of explanations (e.g., the deductive-nomological model in BIBREF5, see BIBREF1 and BIBREF6 for a review). In this view, explanations are like proofs in logic. On the other hand, psychology and cognitive sciences examine “everyday explanations” BIBREF0, BIBREF7. These explanations tend to be selective, are typically encoded in natural language, and shape our understanding and learning in life despite the absence of “axioms.” Please refer to BIBREF8 for a detailed comparison of these two modes of explanation.
Although explanations have attracted significant interest from the AI community thanks to the growing interest on interpretable machine learning BIBREF9, BIBREF10, BIBREF11, such studies seldom refer to prior work in social sciences BIBREF3. Recent studies also show that explanations such as highlighting important features induce limited improvement on human performance in detecting deceptive reviews and media biases BIBREF12, BIBREF13. Therefore, we believe that developing a computational understanding of everyday explanations is crucial for explainable AI. Here we provide a data-driven study of everyday explanations in the context of persuasion.
In particular, we investigate the “pointers” in explanations, inspired by recent work on pointer networks BIBREF14. Copying mechanisms allow a decoder to generate a token by copying from the source, and have been shown to be effective in generation tasks ranging from summarization to program synthesis BIBREF4, BIBREF15, BIBREF16. To the best of our knowledge, our work is the first to investigate the phenomenon of pointers in explanations.
Linguistic accommodation and studies on quotations also examine the phenomenon of reusing words BIBREF17, BIBREF18, BIBREF19, BIBREF20. For instance, BIBREF21 show that power differences are reflected in the echoing of function words; BIBREF22 find that news media prefer to quote locally distinct sentences in political debates. In comparison, our word-level formulation presents a fine-grained view of echoing words, and puts a stronger emphasis on content words than work on linguistic accommodation.
Finally, our work is concerned with an especially challenging problem in social interaction: persuasion. A battery of studies have done work to enhance our understanding of persuasive arguments BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, and the area of argumentation mining specifically investigates the structure of arguments BIBREF28, BIBREF29, BIBREF30. We build on previous work by BIBREF31 and leverage the dynamics of /r/ChangeMyView. Although our findings are certainly related to the persuasion process, we focus on understanding the self-described reasons for persuasion, instead of the structure of arguments or the factors that drive effective persuasion.
Dataset
Our dataset is derived from the /r/ChangeMyView subreddit, which has more than 720K subscribers BIBREF31. /r/ChangeMyView hosts conversations where someone expresses a view and others then try to change that person's mind. Despite being fundamentally based on argument, /r/ChangeMyView has a reputation for being remarkably civil and productive BIBREF32, e.g., a journalist wrote “In a culture of brittle talking points that we guard with our lives, Change My View is a source of motion and surprise” BIBREF33.
The delta mechanism in /r/ChangeMyView allows members to acknowledge opinion changes and enables us to identify explanations for opinion changes BIBREF34. Specifically, it requires “Any user, whether they're the OP or not, should reply to a comment that changed their view with a delta symbol and an explanation of the change.” As a result, we have access to tens of thousands of naturally-occurring explanations and associated explananda. In this work, we focus on the opinion changes of the original posters.
Throughout this paper, we use the following terminology:
[itemsep=-5pt,leftmargin=*,topsep=0pt]
An original post (OP) is an initial post where the original poster justifies his or her opinion. We also use OP to refer to the original poster.
A persuasive comment (PC) is a comment that directly leads to an opinion change on the part of the OP (i.e., winning a $\Delta $).
A top-level comment is a comment that directly replies to an OP, and /r/ChangeMyView requires the top-level comment to “challenge at least one aspect of OP’s stated view (however minor), unless they are asking a clarifying question.”
An explanation is a comment where an OP acknowledges a change in his or her view and provides an explanation of the change. As shown in Table TABREF1, the explanation not only provides a rationale, it can also include other discourse acts, such as expressing gratitude.
Using https://pushshift.io, we collect the posts and comments in /r/ChangeMyView from January 17th, 2013 to January 31st, 2019, and extract tuples of (OP, PC, explanation). We use the tuples from the final six months of our dataset as the test set, those from the six months before that as the validation set, and the remaining tuples as the training set. The sets contain 5,270, 5,831, and 26,617 tuples respectively. Note that there is no overlap in time between the three sets and the test set can therefore be used to assess generalization including potential changes in community norms and world events.
Preprocessing. We perform a number of preprocessing steps, such as converting blockquotes in Markdown to quotes, filtering explicit edits made by authors, mapping all URLs to a special @url@ token, and replacing hyperlinks with the link text. We ignore all triples that contain any deleted comments or posts. We use spaCy for tokenization and tagging BIBREF35. We also use the NLTK implementation of the Porter stemming algorithm to store the stemmed version of each word, for later use in our prediction task BIBREF36, BIBREF37. Refer to the supplementary material for more information on preprocessing.
Data statistics. Table TABREF16 provides basic statistics of the training tuples and how they compare to other comments. We highlight the fact that PCs are on average longer than top-level comments, suggesting that PCs contain substantial counterarguments that directly contribute to opinion change. Therefore, we simplify the problem by focusing on the (OP, PC, explanation) tuples and ignore any other exchanges between an OP and a commenter.
Below, we highlight some notable features of explanations as they appear in our dataset.
The length of explanations shows stronger correlation with that of OPs and PCs than between OPs and PCs (Figure FIGREF8). This observation indicates that explanations are somehow better related with OPs and PCs than PCs are with OPs in terms of language use. A possible reason is that the explainer combines their natural tendency towards length with accommodating the PC.
Explanations have a greater fraction of “pointers” than do persuasive comments (Figure FIGREF8). We measure the likelihood of a word in an explanation being copied from either its OP or PC and provide a similar probability for a PC for copying from its OP. As we discussed in Section SECREF1, the words in an explanation are much more likely to come from the existing discussion than are the words in a PC (59.8% vs 39.0%). This phenomenon holds even if we restrict ourselves to considering words outside quotations, which removes the effect of quoting other parts of the discussion, and if we focus only on content words, which removes the effect of “reusing” stopwords.
Relation between a word being echoed and its document frequency (Figure FIGREF8). Finally, as a preview of our main results, the document frequency of a word from the explanandum is related to the probability of being echoed in the explanation. Although the average likelihood declines as the document frequency gets lower, we observe an intriguing U-shape in the scatter plot. In other words, the words that are most likely to be echoed are either unusually frequent or unusually rare, while most words in the middle show a moderate likelihood of being echoed.
Understanding the Pointers in Explanations
To further investigate how explanations select words from the explanandum, we formulate a word-level prediction task to predict whether words in an OP or PC are echoed in its explanation. Formally, given a tuple of (OP, PC, explanation), we extract the unique stemmed words as $\mathcal {V}_{\text{OP}}, \mathcal {V}_{\text{PC}}, \mathcal {V}_{\text{EXP}}$. We then define the label for each word in the OP or PC, $w \in \mathcal {V}_{\text{OP}} \cup \mathcal {V}_{\text{PC}}$, based on the explanation as follows:
Our prediction task is thus a straightforward binary classification task at the word level. We develop the following five groups of features to capture properties of how a word is used in the explanandum (see Table TABREF18 for the full list):
[itemsep=0pt,leftmargin=*,topsep=0pt]
Non-contextual properties of a word. These features are derived directly from the word and capture the general tendency of a word being echoed in explanations.
Word usage in an OP or PC (two groups). These features capture how a word is used in an OP or PC. As a result, for each feature, we have two values for the OP and PC respectively.
How a word connects an OP and PC. These features look at the difference between word usage in the OP and PC. We expect this group to be the most important in our task.
General OP/PC properties. These features capture the general properties of a conversation. They can be used to characterize the background distribution of echoing.
Table TABREF18 further shows the intuition for including each feature, and condensed $t$-test results after Bonferroni correction. Specifically, we test whether the words that were echoed in explanations have different feature values from those that were not echoed. In addition to considering all words, we also separately consider stopwords and content words in light of Figure FIGREF8. Here, we highlight a few observations:
[itemsep=0pt,leftmargin=*,topsep=0pt]
Although we expect more complicated words (#characters) to be echoed more often, this is not the case on average. We also observe an interesting example of Simpson's paradox in the results for Wordnet depth BIBREF38: shallower words are more likely to be echoed across all words, but deeper words are more likely to be echoed in content words and stopwords.
OPs and PCs generally exhibit similar behavior for most features, except for part-of-speech and grammatical relation (subject, object, and other.) For instance, verbs in an OP are less likely to be echoed, while verbs in a PC are more likely to be echoed.
Although nouns from both OPs and PCs are less likely to be echoed, within content words, subjects and objects from an OP are more likely to be echoed. Surprisingly, subjects and objects in a PC are less likely to be echoed, which suggests that the original poster tends to refer back to their own subjects and objects, or introduce new ones, when providing explanations.
Later words in OPs and PCs are more likely to be echoed, especially in OPs. This could relate to OPs summarizing their rationales at the end of their post and PCs putting their strongest points last.
Although the number of surface forms in an OP or PC is positively correlated with being echoed, the differences in surface forms show reverse trends: the more surface forms of a word that show up only in the PC (i.e., not in the OP), the more likely a word is to be echoed. However, the reverse is true for the number of surface forms in only the OP. Such contrast echoes BIBREF31, in which dissimilarity in word usage between the OP and PC was a predictive feature of successful persuasion.
Predicting Pointers
We further examine the effectiveness of our proposed features in a predictive setting. These features achieve strong performance in the word-level classification task, and can enhance neural models in both the word-level task and generating explanations. However, the word-level task remains challenging, especially for content words.
Predicting Pointers ::: Experiment setup
We consider two classifiers for our word-level classification task: logistic regression and gradient boosting tree (XGBoost) BIBREF39. We hypothesized that XGBoost would outperform logistic regression because our problem is non-linear, as shown in Figure FIGREF8.
To examine the utility of our features in a neural framework, we further adapt our word-level task as a tagging task, and use LSTM as a baseline. Specifically, we concatenate an OP and PC with a special token as the separator so that an LSTM model can potentially distinguish the OP from PC, and then tag each word based on the label of its stemmed version. We use GloVe embeddings to initialize the word embeddings BIBREF40. We concatenate our proposed features of the corresponding stemmed word to the word embedding; the resulting difference in performance between a vanilla LSTM demonstrates the utility of our proposed features. We scale all features to $[0, 1]$ before fitting the models. As introduced in Section SECREF3, we split our tuples of (OP, PC, explanation) into training, validation, and test sets, and use the validation set for hyperparameter tuning. Refer to the supplementary material for additional details in the experiment.
Evaluation metric. Since our problem is imbalanced, we use the F1 score as our evaluation metric. For the tagging approach, we average the labels of words with the same stemmed version to obtain a single prediction for the stemmed word. To establish a baseline, we consider a random method that predicts the positive label with 0.15 probability (the base rate of positive instances).
Predicting Pointers ::: Prediction Performance
Overall performance (Figure FIGREF28). Although our word-level task is heavily imbalanced, all of our models outperform the random baseline by a wide margin. As expected, content words are much more difficult to predict than stopwords, but the best F1 score in content words more than doubles that of the random baseline (0.286 vs. 0.116). Notably, although we strongly improve on our random baseline, even our best F1 scores are relatively low, and this holds true regardless of the model used. Despite involving more tokens than standard tagging tasks (e.g., BIBREF41 and BIBREF42), predicting whether a word is going to be echoed in explanations remains a challenging problem.
Although the vanilla LSTM model incorporates additional knowledge (in the form of word embeddings), the feature-based XGBoost and logistic regression models both outperform the vanilla LSTM model. Concatenating our proposed features with word embeddings leads to improved performance from the LSTM model, which becomes comparable to XGBoost. This suggests that our proposed features can be difficult to learn with an LSTM alone.
Despite the non-linearity observed in Figure FIGREF8, XGBoost only outperforms logistic regression by a small margin. In the rest of this section, we use XGBoost to further examine the effectiveness of different groups of features, and model performance in different conditions.
Ablation performance (Table TABREF34). First, if we only consider a single group of features, as we hypothesized, the relation between OP and PC is crucial and leads to almost as strong performance in content words as using all features. To further understand the strong performance of OP-PC relation, Figure FIGREF28 shows the feature importance in the ablated model, measured by the normalized total gain (see the supplementary material for feature importance in the full model). A word's occurrence in both the OP and PC is clearly the most important feature, with distance between its POS tag distributions as the second most important. Recall that in Table TABREF18 we show that words that have similar POS behavior between the OP and PC are more likely to be echoed in the explanation.
Overall, it seems that word-level properties contribute the most valuable signals for predicting stopwords. If we restrict ourselves to only information in either an OP or PC, how a word is used in a PC is much more predictive of content word echoing (0.233 vs 0.191). This observation suggests that, for content words, the PC captures more valuable information than the OP. This finding is somewhat surprising given that the OP sets the topic of discussion and writes the explanation.
As for the effects of removing a group of features, we can see that there is little change in the performance on content words. This can be explained by the strong performance of the OP-PC relation on its own, and the possibility of the OP-PC relation being approximated by OP and PC usage. Again, word-level properties are valuable for strong performance in stopwords.
Performance vs. word source (Figure FIGREF28). We further break down the performance by where a word is from. We can group a word based on whether it shows up only in an OP, a PC, or both OP and PC, as shown in Table TABREF1. There is a striking difference between the performance in the three categories (e.g., for all words, 0.63 in OP & PC vs. 0.271 in PC only). The strong performance on words in both the OP and PC applies to stopwords and content words, even accounting for the shift in the random baseline, and recalls the importance of occurring both in OP and PC as a feature.
Furthermore, the echoing of words from the PC is harder to predict (0.271) than from the OP (0.347) despite the fact that words only in PCs are more likely to be echoed than words only in OPs (13.5% vs. 8.6%). The performance difference is driven by stopwords, suggesting that our overall model is better at capturing signals for stopwords used in OPs. This might relate to the fact that the OP and the explanation are written by the same author; prior studies have demonstrated the important role of stopwords for authorship attribution BIBREF43.
Nouns are the most reliably predicted part-of-speech tag within content words (Table TABREF35). Next, we break down the performance by part-of-speech tags. We focus on the part-of-speech tags that are semantically important, namely, nouns, proper nouns, verbs, adverbs, and adjectives.
Prediction performance can be seen as a proxy for how reliably a part-of-speech tag is reused when providing explanations. Consistent with our expectations for the importance of nouns and verbs, our models achieve the best performance on nouns within content words. Verbs are more challenging, but become the least difficult tag to predict when we consider all words, likely due to stopwords such as “have.” Adjectives turn out to be the most challenging category, suggesting that adjectival choice is perhaps more arbitrary than other parts of speech, and therefore less central to the process of constructing an explanation. The important role of nouns in shaping explanations resonates with the high recall rate of nouns in memory tasks BIBREF44.
Predicting Pointers ::: The Effect on Generating Explanations
One way to measure the ultimate success of understanding pointers in explanations is to be able to generate explanations. We use the pointer generator network with coverage as our starting point BIBREF4, BIBREF46 (see the supplementary material for details). We investigate whether concatenating our proposed features with word embeddings can improve generation performance, as measured by ROUGE scores.
Consistent with results in sequence tagging for word-level echoing prediction, our proposed features can enhance a neural model with copying mechanisms (see Table TABREF37). Specifically, their use leads to statistically significant improvement in ROUGE-1 and ROUGE-L, while slightly hurting the performance in ROUGE-2 (the difference is not statistically significant). We also find that our features can increase the likelihood of copying: an average of 17.59 unique words get copied to the generated explanation with our features, compared to 14.17 unique words without our features. For comparison, target explanations have an average of 34.81 unique words. We emphasize that generating explanations is a very challenging task (evidenced by the low ROUGE scores and examples in the supplementary material), and that fully solving the generation task requires more work.
Concluding Discussions
In this work, we conduct the first large-scale empirical study of everyday explanations in the context of persuasion. We assemble a novel dataset and formulate a word-level prediction task to understand the selective nature of explanations. Our results suggest that the relation between an OP and PC plays an important role in predicting the echoing of content words, while a word's non-contextual properties matter for stopwords. We show that vanilla LSTMs fail to learn some of the features we develop and that our proposed features can improve the performance in generating explanations using pointer networks. We also demonstrate the important role of nouns in shaping explanations.
Although our approach strongly outperforms random baselines, the relatively low F1 scores indicate that predicting which word is echoed in explanations is a very challenging task. It follows that we are only able to derive a limited understanding of how people choose to echo words in explanations. The extent to which explanation construction is fundamentally random BIBREF47, or whether there exist other unidentified patterns, is of course an open question. We hope that our study and the resources that we release encourage further work in understanding the pragmatics of explanations.
There are many promising research directions for future work in advancing the computational understanding of explanations. First, although /r/ChangeMyView has the useful property that its explanations are closely connected to its explananda, it is important to further investigate the extent to which our findings generalize beyond /r/ChangeMyView and Reddit and establish universal properties of explanations. Second, it is important to connect the words in explanations that we investigate here to the structure of explanations in psychology BIBREF7. Third, in addition to understanding what goes into an explanation, we need to understand what makes an explanation effective. A better understanding of explanations not only helps develop explainable AI, but also informs the process of collecting explanations that machine learning systems learn from BIBREF48, BIBREF49, BIBREF50.
Acknowledgments
We thank Kimberley Buchan, anonymous reviewers, and members of the NLP+CSS research group at CU Boulder for their insightful comments and discussions; Jason Baumgartner for sharing the dataset that enabled this research.
Supplemental Material ::: Preprocessing.
Before tokenizing, we pass each OP, PC, and explanation through a preprocessing pipeline with the following steps; a condensed sketch of the pipeline appears after the list:
Occasionally, /r/ChangeMyView's moderators will edit comments, prefixing their edits with “Hello, users of CMV” or “This is a footnote” (see Table TABREF46). We remove this, and any text that follows on the same line.
We replace URLs with a “@url@” token, defining a URL to be any string which matches the following regular expression: (https?://[^\s)]*).
We replace “$\Delta $” symbols and their analogues—such as “$\delta $”, “&#8710;”, and “!delta”—with the word “delta”. We also remove the word “delta” from explanations if the explanation starts with it.
Reddit–specific prefixes, such as “u/” (denoting a user) and “r/” (denoting a subreddit) are removed, as we observed that they often interfered with spaCy's ability to correctly parse its inputs.
We remove any text matching the regular expression EDIT(.*?):.* from the beginning of the match to the end of that line, as well as variations, such as Edit(.*?):.*.
Reddit allows users to insert blockquoted text. We extract any blockquotes and surround them with standard quotation marks.
We replace all contiguous whitespace with a single space. We also do this with tab characters and carriage returns, and with two or more hyphens, asterisks, or underscores.
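The steps above can be approximated with a short regular-expression pipeline. The sketch below is one minimal reading of these rules; the exact moderator-note, edit, and blockquote patterns are assumptions rather than the released implementation.

```python
import re

def preprocess(text):
    # Drop moderator notes and anything after them on the same line.
    text = re.sub(r"(Hello, users of CMV|This is a footnote).*", "", text)
    # Replace URLs with a placeholder token.
    text = re.sub(r"https?://[^\s)]*", "@url@", text)
    # Normalize delta symbols and common variants to the word "delta".
    text = re.sub(r"\u0394|\u03b4|&#8710;|!delta", "delta", text)
    # Strip Reddit-specific prefixes such as "u/" and "r/".
    text = re.sub(r"\b[ur]/", "", text)
    # Remove explicit edits, e.g. "EDIT: ..." or "Edit (typo): ...", to end of line.
    text = re.sub(r"(?i)EDIT\s*(\(.*?\))?:.*", "", text)
    # Surround Markdown blockquotes with standard quotation marks.
    text = re.sub(r"(?m)^>\s?(.*)$", r'"\1"', text)
    # Collapse runs of hyphens/asterisks/underscores, then all whitespace.
    text = re.sub(r"-{2,}|\*{2,}|_{2,}", " ", text)
    return re.sub(r"\s+", " ", text).strip()
```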
Tokenizing the data. After passing text through our preprocessing pipeline, we use the default spaCy pipeline to extract part-of-speech tags, dependency tags, and entity details for each token BIBREF35. In addition, we use NLTK to stem words BIBREF36. This is used to compute all word-level features discussed in Section 4 of the main paper.
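A bare-bones version of this step might look as follows, assuming the default English spaCy model and NLTK's Porter stemmer; the field names are ours.

```python
import spacy
from nltk.stem.porter import PorterStemmer

nlp = spacy.load("en_core_web_sm")
stemmer = PorterStemmer()

def analyze(text):
    """Return per-token stem, POS tag, dependency label, and entity type."""
    return [
        {
            "token": tok.text,
            "stem": stemmer.stem(tok.text.lower()),
            "pos": tok.pos_,       # Universal Dependencies coarse POS tag
            "dep": tok.dep_,       # dependency label (nsubj, dobj, ...)
            "ent": tok.ent_type_,  # entity label, empty string if none
        }
        for tok in nlp(text)
    ]
```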
Supplemental Material ::: PC Echoing OP
Figure FIGREF49 shows a similar U-shape in the probability of a word being echoed in the PC. Visually, however, rare words seem more likely to have a high echoing probability in explanations, while that probability is higher for words with moderate frequency in PCs. As PCs tend to be longer than explanations, we also normalized each word's echoing probability by that of the most frequent words so that the two settings are comparable. After normalization, we still observe a higher likelihood of echoing rare words, and a lower likelihood of echoing words with moderate frequency, in explanations than in PCs.
Supplemental Material ::: Feature Calculation
Given an OP, PC, and explanation, we calculate a 66–dimensional vector for each unique stem in the concatenated OP and PC. Here, we describe the process of calculating each feature; a short sketch of two representative computations follows the list.
Inverse document frequency: for a stem $s$, the inverse document frequency is given by $\log \frac{N}{\mathrm {df}_s}$, where $N$ is the total number of documents (here, OPs and PCs) in the training set, and $\mathrm {df}_s$ is the number of documents in the training data whose set of stemmed words contains $s$.
Stem length: the number of characters in the stem.
Wordnet depth (min): starting with the stem, this is the length of the minimum hypernym path to the synset root.
Wordnet depth (max): similarly, this is the length of the maximum hypernym path.
Stem transfer probability: the percentage of times in which a stem seen in the explanandum is also seen in the explanation. If, during validation or testing, a stem is encountered for the first time, we set this to be the mean probability of transfer over all stems seen in the training data.
OP part–of–speech tags: a stem can represent multiple parts of speech. For example, both “traditions” and “traditional” will be stemmed to “tradit.” We count the percentage of times the given stem appears as each part–of–speech tag, following the Universal Dependencies scheme BIBREF53. If the stem does not appear in the OP, each part–of–speech feature will be $\frac{1}{16}$.
OP subject, object, and other: Given a stem $s$, we calculate the percentage of times that $s$'s surface forms in the OP are classified as subjects, objects, or something else by spaCy. We follow the CLEAR guidelines BIBREF51 and use the following tags to indicate a subject: nsubj, nsubjpass, csubj, csubjpass, agent, and expl. Objects are identified using these tags: dobj, dative, attr, oprd. If $s$ does not appear at all in the OP, we let subject, object, and other each equal $\frac{1}{3}$.
OP term frequency: the number of times any surface form of a stem appears in the list of tokens that make up the OP.
OP normalized term frequency: the percentage of the OP's tokens which are a surface form of the given stem.
OP # of surface forms: the number of different surface forms for the given stem.
OP location: the average location of each surface form of the given stem which appears in the OP, where the location of a surface form is defined as the percentage of tokens which appear after that surface form. If the stem does not appear at all in the OP, this value is $\frac{1}{2}$.
OP is in quotes: the number of times the stem appears in the OP surrounded by quotation marks.
OP is entity: the percentage of tokens in the OP that are both a surface form for the given stem, and are tagged by SpaCy as one of the following entities: PERSON, NORP, FAC, ORG, GPE, LOC, PRODUCT, EVENT, WORK_OF_ART, LAW, and LANGUAGE.
PC equivalents of features 6-30.
In both OP and PC: 1, if one of the stem's surface forms appears in both the OP and PC. 0 otherwise.
# of unique surface forms in OP: for the given stem, the number of surface forms that appear in the OP, but not in the PC.
# of unique surface forms in PC: for the given stem, the number of surface forms that appear in the PC, but not in the OP.
Stem part–of–speech distribution difference: we consider the concatenation of features 6-21, along with the concatenation of features 31-46, as two distributions, and calculate the Jensen–Shannon divergence between them.
Stem dependency distribution difference: similarly, we consider the concatenation of features 22-24 (OP dependency labels), and the concatenation of features 47-49 (PC dependency labels), as two distributions, and calculate the Jensen–Shannon divergence between them.
OP length: the number of tokens in the OP.
PC length: the number of tokens in the PC.
Length difference: the absolute value of the difference between OP length and PC length.
Avg. word length difference: the difference between the average number of characters per token in the OP and the average number of characters per token in the PC.
OP/PC part–of–speech tag distribution difference: the Jensen–Shannon divergence between the part–of–speech tag distributions of the OP on the one hand, and the PC on the other.
Depth of the PC in the thread: since there can be many back–and–forth replies before a user awards a delta, we number each comment in a thread, starting at 0 for the OP, and incrementing for each new comment before the PC appears.
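For illustration, the inverse document frequency and the Jensen–Shannon distribution differences above can be computed roughly as follows. This is a sketch with our own variable names, not the released feature code.

```python
import math
import numpy as np
from scipy.spatial.distance import jensenshannon

def inverse_document_frequency(stem, training_docs):
    """training_docs: list of sets of stemmed words, one set per OP or PC."""
    df = sum(1 for doc in training_docs if stem in doc)
    return math.log(len(training_docs) / df) if df else 0.0

def distribution_difference(op_counts, pc_counts):
    """Jensen-Shannon divergence between two count vectors (e.g., POS tags)."""
    p = np.asarray(op_counts, dtype=float)
    q = np.asarray(pc_counts, dtype=float)
    # scipy normalizes the inputs and returns the JS distance (sqrt of the divergence).
    return jensenshannon(p, q) ** 2
```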
Supplemental Material ::: Word–level Prediction Task
For each non–LSTM classifier, we train 11 models: one full model, and forward and backward models for each of the five feature groups. To train, we fit on the training set and use the validation set for hyperparameter tuning.
For the random model, since the echo rate of the training set is 15%, we simply predict 1 with 15% probability, and 0 otherwise.
For logistic regression, we use the lbfgs solver. To tune hyperparameters, we perform an exhaustive grid search, with $C$ taking values from $\lbrace 10^{x}:x\in \lbrace -1, 0, 1, 2, 3, 4\rbrace \rbrace $, and the respective weights of the negative and positive classes taking values from $\lbrace (x, 1-x): x\in \lbrace 0.25, 0.20, 0.15\rbrace \rbrace $.
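A scikit-learn sketch of this search is below. Note that the paper tunes on a fixed validation set, whereas GridSearchCV as written uses cross-validation, so a PredefinedSplit would be needed to reproduce the setup exactly; the scoring choice and max_iter value are assumptions.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

param_grid = {
    "C": [0.1, 1, 10, 100, 1000, 10000],
    "class_weight": [{0: x, 1: 1 - x} for x in (0.25, 0.20, 0.15)],
}
search = GridSearchCV(
    LogisticRegression(solver="lbfgs", max_iter=1000),  # max_iter raised for convergence
    param_grid,
    scoring="f1",
)
# search.fit(X_train, y_train)  # X_train: scaled 66-dim features, y_train: echo labels
```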
We also train XGBoost models. Here, we use a learning rate of $0.1$, 1000 estimator trees, and no subsampling. We perform an exhaustive grid search to tune hyperparameters, with the max tree depth equaling 5, 7, or 9, the minimum weight of a child equaling 3, 5, or 7, and the weight of a positive class instance equaling 3, 4, or 5.
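Similarly, a hedged sketch of the XGBoost configuration, using the parameter names of the xgboost scikit-learn wrapper:

```python
from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV

base = XGBClassifier(learning_rate=0.1, n_estimators=1000, subsample=1.0)
param_grid = {
    "max_depth": [5, 7, 9],
    "min_child_weight": [3, 5, 7],
    "scale_pos_weight": [3, 4, 5],  # weight of a positive-class instance
}
search = GridSearchCV(base, param_grid, scoring="f1")
# search.fit(X_train, y_train)
```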
Finally, we train two LSTM models, each with a single 300–dimensional hidden layer. Due to efficiency considerations, we eschewed a full search of the parameter space, but experimented with different values of dropout, learning rate, positive class weight, and batch size. We ultimately trained each model for five epochs with a batch size of 32 and a learning rate of 0.001, using the Adam optimizer BIBREF52. We also weight positive instances four times more highly than negative instances.
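One plausible PyTorch rendering of the tagging model is shown below: GloVe-initialized embeddings concatenated with the 66 stem-level features, a single 300-dimensional LSTM layer, positive instances weighted four times, and Adam with learning rate 0.001. The vocabulary size and other details are illustrative; the released implementation may differ.

```python
import torch
import torch.nn as nn

class EchoTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=300, feat_dim=66):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # initialized from GloVe in practice
        self.lstm = nn.LSTM(embed_dim + feat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids, features):
        x = torch.cat([self.embed(token_ids), features], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h).squeeze(-1)  # one echoing logit per token

model = EchoTagger(vocab_size=50_000)  # vocabulary size is illustrative
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(4.0))
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```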
Supplemental Material ::: Generating Explanations
We formulate an abstractive summarization task using an OP concatenated with the PC as a source, and the explanation as target. We train two models, one with the features described above, and one without. A shared vocabulary of 50k words is constructed from the training set by setting the maximum encoding length to 500 words. We set the maximum decoding length to 100. We use a pointer generator network with coverage for generating explanations, using a bidirectional LSTM as an encoder and a unidirectional LSTM as a decoder. Both use a 256-dimensional hidden state. The parameters of this network are tuned using a validation set of five thousand instances. We constrain the batch size to 16 and train the network for 20k steps, using the parameters described in Table TABREF82. | random method , LSTM |
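For intuition, the heart of the pointer-generator model BIBREF4 is a soft switch that mixes the decoder's vocabulary distribution with the attention distribution over source tokens. The numpy sketch below shows only that mixing step, omitting coverage and the extended vocabulary used for out-of-vocabulary source words.

```python
import numpy as np

def final_distribution(p_gen, vocab_dist, attention, source_ids):
    """p_gen: generation probability in [0, 1]; vocab_dist: (vocab_size,) softmax over
    the vocabulary; attention: (src_len,) attention weights; source_ids: vocabulary id
    of each source token."""
    copy_dist = np.zeros_like(vocab_dist)
    np.add.at(copy_dist, source_ids, attention)  # scatter attention mass onto word ids
    return p_gen * vocab_dist + (1.0 - p_gen) * copy_dist
```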
93ac147765ee2573923f68aa47741d4bcbf88fa8 | 93ac147765ee2573923f68aa47741d4bcbf88fa8_0 | Q: What are their proposed features? | Non-contextual properties of a word; word usage in an OP or PC (two groups); how a word connects an OP and PC; general OP/PC properties |
14c0328e8ec6360a913b8ecb3e50cb27650ff768 | 14c0328e8ec6360a913b8ecb3e50cb27650ff768_0 | Q: What are the overall baseline results on this new task?
Text: Introduction
Explanations are essential for understanding and learning BIBREF0. They can take many forms, ranging from everyday explanations for questions such as why one likes Star Wars, to sophisticated formalization in the philosophy of science BIBREF1, to simply highlighting features in recent work on interpretable machine learning BIBREF2.
Although everyday explanations are mostly encoded in natural language, natural language explanations remain understudied in NLP, partly due to a lack of appropriate datasets and problem formulations. To address these challenges, we leverage /r/ChangeMyView, a community dedicated to sharing counterarguments to controversial views on Reddit, to build a sizable dataset of naturally-occurring explanations. Specifically, in /r/ChangeMyView, an original poster (OP) first delineates the rationales for a (controversial) opinion (e.g., in Table TABREF1, “most hit music artists today are bad musicians”). Members of /r/ChangeMyView are invited to provide counterarguments. If a counterargument changes the OP's view, the OP awards a $\Delta $ to indicate the change and is required to explain why the counterargument is persuasive. In this work, we refer to what is being explained, including both the original post and the persuasive comment, as the explanandum.
An important advantage of explanations in /r/ChangeMyView is that the explanandum contains most of the required information to provide its explanation. These explanations often select key counterarguments in the persuasive comment and connect them with the original post. As shown in Table TABREF1, the explanation naturally points to, or echoes, part of the explanandum (including both the persuasive comment and the original post) and in this case highlights the argument of “music serving different purposes.”
These naturally-occurring explanations thus enable us to computationally investigate the selective nature of explanations: “people rarely, if ever, expect an explanation that consists of an actual and complete cause of an event. Humans are adept at selecting one or two causes from a sometimes infinite number of causes to be the explanation” BIBREF3. To understand the selective process of providing explanations, we formulate a word-level task to predict whether a word in an explanandum will be echoed in its explanation.
Inspired by the observation that words that are likely to be echoed are either frequent or rare, we propose a variety of features to capture how a word is used in the explanandum as well as its non-contextual properties in Section SECREF4. We find that a word's usage in the original post and in the persuasive argument are similarly related to being echoed, except in part-of-speech tags and grammatical relations. For instance, verbs in the original post are less likely to be echoed, while the relationship is reversed in the persuasive argument.
We further demonstrate that these features can significantly outperform a random baseline and even a neural model with significantly more knowledge of a word's context. Predicting whether content words (i.e., non-stopwords) are echoed is much more difficult than predicting stopwords; among content words, adjectives are the most difficult and nouns are relatively the easiest. This observation highlights the important role of nouns in explanations. We also find that the relationship between a word's usage in the original post and in the persuasive comment is crucial for predicting the echoing of content words. Our proposed features can also improve the performance of pointer generator networks with coverage in generating explanations BIBREF4.
To summarize, our main contributions are:
We highlight the importance of computationally characterizing human explanations and formulate a concrete problem of predicting how information is selected from explananda to form explanations, including building a novel dataset of naturally-occurring explanations.
We provide a computational characterization of natural language explanations and demonstrate the U-shape in which words get echoed.
We identify interesting patterns in what gets echoed through a novel word-level classification task, including the importance of nouns in shaping explanations and the importance of contextual properties of both the original post and persuasive comment in predicting the echoing of content words.
We show that vanilla LSTMs fail to learn some of the features we develop and that the proposed features can even improve performance in generating explanations with pointer networks.
Our code and dataset are available at https://chenhaot.com/papers/explanation-pointers.html.
Related Work
To provide background for our study, we first present a brief overview of explanations for the NLP community, and then discuss the connection of our study with pointer networks, linguistic accommodation, and argumentation mining.
The most developed discussion of explanations is in the philosophy of science. Extensive studies aim to develop formal models of explanations (e.g., the deductive-nomological model in BIBREF5, see BIBREF1 and BIBREF6 for a review). In this view, explanations are like proofs in logic. On the other hand, psychology and cognitive sciences examine “everyday explanations” BIBREF0, BIBREF7. These explanations tend to be selective, are typically encoded in natural language, and shape our understanding and learning in life despite the absence of “axioms.” Please refer to BIBREF8 for a detailed comparison of these two modes of explanation.
Although explanations have attracted significant interest from the AI community thanks to the growing interest in interpretable machine learning BIBREF9, BIBREF10, BIBREF11, such studies seldom refer to prior work in the social sciences BIBREF3. Recent studies also show that explanations such as highlighting important features induce limited improvement on human performance in detecting deceptive reviews and media biases BIBREF12, BIBREF13. Therefore, we believe that developing a computational understanding of everyday explanations is crucial for explainable AI. Here we provide a data-driven study of everyday explanations in the context of persuasion.
In particular, we investigate the “pointers” in explanations, inspired by recent work on pointer networks BIBREF14. Copying mechanisms allow a decoder to generate a token by copying from the source, and have been shown to be effective in generation tasks ranging from summarization to program synthesis BIBREF4, BIBREF15, BIBREF16. To the best of our knowledge, our work is the first to investigate the phenomenon of pointers in explanations.
Linguistic accommodation and studies on quotations also examine the phenomenon of reusing words BIBREF17, BIBREF18, BIBREF19, BIBREF20. For instance, BIBREF21 show that power differences are reflected in the echoing of function words; BIBREF22 find that news media prefer to quote locally distinct sentences in political debates. In comparison, our word-level formulation presents a fine-grained view of echoing words, and puts a stronger emphasis on content words than work on linguistic accommodation.
Finally, our work is concerned with an especially challenging problem in social interaction: persuasion. A battery of studies have done work to enhance our understanding of persuasive arguments BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, and the area of argumentation mining specifically investigates the structure of arguments BIBREF28, BIBREF29, BIBREF30. We build on previous work by BIBREF31 and leverage the dynamics of /r/ChangeMyView. Although our findings are certainly related to the persuasion process, we focus on understanding the self-described reasons for persuasion, instead of the structure of arguments or the factors that drive effective persuasion.
Dataset
Our dataset is derived from the /r/ChangeMyView subreddit, which has more than 720K subscribers BIBREF31. /r/ChangeMyView hosts conversations where someone expresses a view and others then try to change that person's mind. Despite being fundamentally based on argument, /r/ChangeMyView has a reputation for being remarkably civil and productive BIBREF32, e.g., a journalist wrote “In a culture of brittle talking points that we guard with our lives, Change My View is a source of motion and surprise” BIBREF33.
The delta mechanism in /r/ChangeMyView allows members to acknowledge opinion changes and enables us to identify explanations for opinion changes BIBREF34. Specifically, it requires “Any user, whether they're the OP or not, should reply to a comment that changed their view with a delta symbol and an explanation of the change.” As a result, we have access to tens of thousands of naturally-occurring explanations and associated explananda. In this work, we focus on the opinion changes of the original posters.
Throughout this paper, we use the following terminology:
An original post (OP) is an initial post where the original poster justifies his or her opinion. We also use OP to refer to the original poster.
A persuasive comment (PC) is a comment that directly leads to an opinion change on the part of the OP (i.e., winning a $\Delta $).
A top-level comment is a comment that directly replies to an OP, and /r/ChangeMyView requires the top-level comment to “challenge at least one aspect of OP’s stated view (however minor), unless they are asking a clarifying question.”
An explanation is a comment where an OP acknowledges a change in his or her view and provides an explanation of the change. As shown in Table TABREF1, the explanation not only provides a rationale, it can also include other discourse acts, such as expressing gratitude.
Using https://pushshift.io, we collect the posts and comments in /r/ChangeMyView from January 17th, 2013 to January 31st, 2019, and extract tuples of (OP, PC, explanation). We use the tuples from the final six months of our dataset as the test set, those from the six months before that as the validation set, and the remaining tuples as the training set. The sets contain 5,270, 5,831, and 26,617 tuples respectively. Note that there is no overlap in time between the three sets and the test set can therefore be used to assess generalization including potential changes in community norms and world events.
Preprocessing. We perform a number of preprocessing steps, such as converting blockquotes in Markdown to quotes, filtering explicit edits made by authors, mapping all URLs to a special @url@ token, and replacing hyperlinks with the link text. We ignore all triples that contain any deleted comments or posts. We use spaCy for tokenization and tagging BIBREF35. We also use the NLTK implementation of the Porter stemming algorithm to store the stemmed version of each word, for later use in our prediction task BIBREF36, BIBREF37. Refer to the supplementary material for more information on preprocessing.
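As a concrete illustration, the following is a minimal sketch of the tokenization and stemming step, assuming spaCy's default English pipeline and NLTK's Porter stemmer; the specific model name (en_core_web_sm) is an assumption, since the text only refers to the default spaCy pipeline.

```python
import spacy
from nltk.stem.porter import PorterStemmer

# Assumption: the small English pipeline; the paper does not name a specific spaCy model.
nlp = spacy.load("en_core_web_sm")
stemmer = PorterStemmer()

def tokenize_and_stem(text):
    """Return (token, POS tag, dependency label, stem) tuples for one OP, PC, or explanation."""
    doc = nlp(text)
    return [(tok.text, tok.pos_, tok.dep_, stemmer.stem(tok.text.lower())) for tok in doc]

print(tokenize_and_stem("Most hit music artists today are bad musicians."))
```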
Data statistics. Table TABREF16 provides basic statistics of the training tuples and how they compare to other comments. We highlight the fact that PCs are on average longer than top-level comments, suggesting that PCs contain substantial counterarguments that directly contribute to opinion change. Therefore, we simplify the problem by focusing on the (OP, PC, explanation) tuples and ignore any other exchanges between an OP and a commenter.
Below, we highlight some notable features of explanations as they appear in our dataset.
The length of explanations shows stronger correlation with that of OPs and PCs than between OPs and PCs (Figure FIGREF8). This observation indicates that, in terms of language use, explanations are more closely related to OPs and PCs than PCs are to OPs. A possible reason is that the explainer combines their natural tendency towards length with accommodating the PC.
Explanations have a greater fraction of “pointers” than do persuasive comments (Figure FIGREF8). We measure the likelihood of a word in an explanation being copied from either its OP or PC, and a similar probability of a word in a PC being copied from its OP. As we discussed in Section SECREF1, the words in an explanation are much more likely to come from the existing discussion than are the words in a PC (59.8% vs 39.0%). This phenomenon holds even if we restrict ourselves to considering words outside quotations, which removes the effect of quoting other parts of the discussion, and if we focus only on content words, which removes the effect of “reusing” stopwords.
Relation between a word being echoed and its document frequency (Figure FIGREF8). Finally, as a preview of our main results, the document frequency of a word from the explanandum is related to the probability of being echoed in the explanation. Although the average likelihood declines as the document frequency gets lower, we observe an intriguing U-shape in the scatter plot. In other words, the words that are most likely to be echoed are either unusually frequent or unusually rare, while most words in the middle show a moderate likelihood of being echoed.
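A minimal sketch of how this echoing probability can be computed as a function of document frequency is given below, assuming each training tuple has already been reduced to sets of stemmed words; the variable names are illustrative rather than taken from the released code.

```python
from collections import defaultdict

def echo_rate_by_document_frequency(tuples, doc_freq):
    """tuples: iterable of (op_stems, pc_stems, exp_stems) sets of stemmed words.
    doc_freq: stem -> number of training OPs/PCs whose stemmed word set contains it.
    Returns document frequency -> empirical probability of being echoed in the explanation."""
    echoed, total = defaultdict(int), defaultdict(int)
    for op_stems, pc_stems, exp_stems in tuples:
        for w in op_stems | pc_stems:        # every unique stem in the explanandum
            total[doc_freq[w]] += 1
            if w in exp_stems:               # the stem is echoed in the explanation
                echoed[doc_freq[w]] += 1
    return {df: echoed[df] / total[df] for df in total}
```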
Understanding the Pointers in Explanations
To further investigate how explanations select words from the explanandum, we formulate a word-level prediction task to predict whether words in an OP or PC are echoed in its explanation. Formally, given a tuple of (OP, PC, explanation), we extract the unique stemmed words as $\mathcal {V}_{\text{OP}}, \mathcal {V}_{\text{PC}}, \mathcal {V}_{\text{EXP}}$. We then define the label for each word in the OP or PC, $w \in \mathcal {V}_{\text{OP}} \cup \mathcal {V}_{\text{PC}}$, based on the explanation as follows: the label of $w$ is 1 if $w \in \mathcal {V}_{\text{EXP}}$ (i.e., the word is echoed in the explanation), and 0 otherwise.
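A minimal sketch of this label construction, assuming the three stemmed vocabularies have already been extracted as Python sets; the names are illustrative.

```python
def build_word_labels(op_stems, pc_stems, exp_stems):
    """Label every unique stem in the explanandum: 1 if it is echoed in the explanation, else 0."""
    explanandum_vocab = op_stems | pc_stems          # V_OP union V_PC
    return {w: int(w in exp_stems) for w in explanandum_vocab}

# Example: only "purpos" is echoed, so it is labeled 1 and the remaining stems 0.
labels = build_word_labels({"music", "artist", "purpos"},
                           {"music", "serv", "purpos"},
                           {"purpos", "thank"})
```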
Our prediction task is thus a straightforward binary classification task at the word level. We develop the following five groups of features to capture properties of how a word is used in the explanandum (see Table TABREF18 for the full list):
Non-contextual properties of a word. These features are derived directly from the word and capture the general tendency of a word being echoed in explanations.
Word usage in an OP or PC (two groups). These features capture how a word is used in an OP or PC. As a result, for each feature, we have two values for the OP and PC respectively.
How a word connects an OP and PC. These features look at the difference between word usage in the OP and PC. We expect this group to be the most important in our task.
General OP/PC properties. These features capture the general properties of a conversation. They can be used to characterize the background distribution of echoing.
Table TABREF18 further shows the intuition for including each feature, and condensed $t$-test results after Bonferroni correction. Specifically, we test whether the words that were echoed in explanations have different feature values from those that were not echoed. In addition to considering all words, we also separately consider stopwords and content words in light of Figure FIGREF8. Here, we highlight a few observations:
Although we expect more complicated words (#characters) to be echoed more often, this is not the case on average. We also observe an interesting example of Simpson's paradox in the results for Wordnet depth BIBREF38: shallower words are more likely to be echoed when all words are considered together, but, within content words and within stopwords separately, deeper words are more likely to be echoed.
OPs and PCs generally exhibit similar behavior for most features, except for part-of-speech and grammatical relation (subject, object, and other). For instance, verbs in an OP are less likely to be echoed, while verbs in a PC are more likely to be echoed.
Although nouns from both OPs and PCs are less likely to be echoed, within content words, subjects and objects from an OP are more likely to be echoed. Surprisingly, subjects and objects in a PC are less likely to be echoed, which suggests that the original poster tends to refer back to their own subjects and objects, or introduce new ones, when providing explanations.
Later words in OPs and PCs are more likely to be echoed, especially in OPs. This could relate to OPs summarizing their rationales at the end of their post and PCs putting their strongest points last.
Although the number of surface forms in an OP or PC is positively correlated with being echoed, the differences in surface forms show reverse trends: the more surface forms of a word that show up only in the PC (i.e., not in the OP), the more likely a word is to be echoed. However, the reverse is true for the number of surface forms in only the OP. Such contrast echoes BIBREF31, in which dissimilarity in word usage between the OP and PC was a predictive feature of successful persuasion.
Predicting Pointers
We further examine the effectiveness of our proposed features in a predictive setting. These features achieve strong performance in the word-level classification task, and can enhance neural models in both the word-level task and generating explanations. However, the word-level task remains challenging, especially for content words.
Predicting Pointers ::: Experiment setup
We consider two classifiers for our word-level classification task: logistic regression and gradient boosting tree (XGBoost) BIBREF39. We hypothesized that XGBoost would outperform logistic regression because our problem is non-linear, as shown in Figure FIGREF8.
To examine the utility of our features in a neural framework, we further adapt our word-level task as a tagging task, and use an LSTM as a baseline. Specifically, we concatenate an OP and PC with a special token as the separator so that an LSTM model can potentially distinguish the OP from the PC, and then tag each word based on the label of its stemmed version. We use GloVe embeddings to initialize the word embeddings BIBREF40. We concatenate our proposed features of the corresponding stemmed word to the word embedding; the resulting difference in performance between this model and a vanilla LSTM demonstrates the utility of our proposed features. We scale all features to $[0, 1]$ before fitting the models. As introduced in Section SECREF3, we split our tuples of (OP, PC, explanation) into training, validation, and test sets, and use the validation set for hyperparameter tuning. Refer to the supplementary material for additional details on the experiment.
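A minimal sketch of the tagging-task construction, assuming per-token stems and the stem-level labels defined above are available as dictionaries; the separator token, the fallback values, and the feature lookup are assumptions.

```python
SEP = "<op-pc-sep>"   # illustrative separator token; the paper does not name the special token

def build_tagging_example(op_tokens, pc_tokens, stem_of, stem_labels, stem_features):
    """Concatenate OP and PC and tag each token with the label of its stemmed version.
    stem_of: token -> stem; stem_labels: stem -> {0, 1}; stem_features: stem -> feature vector."""
    tokens = op_tokens + [SEP] + pc_tokens
    tags, feats = [], []
    for tok in tokens:
        stem = stem_of.get(tok, tok)
        tags.append(stem_labels.get(stem, 0))
        feats.append(stem_features.get(stem, [0.0] * 66))   # 66 features per stem (see supplement)
    return tokens, tags, feats
```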
Evaluation metric. Since our problem is imbalanced, we use the F1 score as our evaluation metric. For the tagging approach, we average the labels of words with the same stemmed version to obtain a single prediction for the stemmed word. To establish a baseline, we consider a random method that predicts the positive label with 0.15 probability (the base rate of positive instances).
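A minimal sketch of the metric and the random baseline, assuming scikit-learn's F1 implementation; whether the authors used scikit-learn for scoring is not stated.

```python
import numpy as np
from sklearn.metrics import f1_score

def random_baseline_f1(y_true, p=0.15, seed=0):
    """Predict the positive label with probability p (the base rate) and report F1."""
    rng = np.random.default_rng(seed)
    y_rand = (rng.random(len(y_true)) < p).astype(int)
    return f1_score(y_true, y_rand)
```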
Predicting Pointers ::: Prediction Performance
Overall performance (Figure FIGREF28). Although our word-level task is heavily imbalanced, all of our models outperform the random baseline by a wide margin. As expected, content words are much more difficult to predict than stopwords, but the best F1 score in content words more than doubles that of the random baseline (0.286 vs. 0.116). Notably, although we strongly improve on our random baseline, even our best F1 scores are relatively low, and this holds true regardless of the model used. Despite involving more tokens than standard tagging tasks (e.g., BIBREF41 and BIBREF42), predicting whether a word is going to be echoed in explanations remains a challenging problem.
Although the vanilla LSTM model incorporates additional knowledge (in the form of word embeddings), the feature-based XGBoost and logistic regression models both outperform the vanilla LSTM model. Concatenating our proposed features with word embeddings leads to improved performance from the LSTM model, which becomes comparable to XGBoost. This suggests that our proposed features can be difficult to learn with an LSTM alone.
Despite the non-linearity observed in Figure FIGREF8, XGBoost only outperforms logistic regression by a small margin. In the rest of this section, we use XGBoost to further examine the effectiveness of different groups of features, and model performance in different conditions.
Ablation performance (Table TABREF34). First, if we only consider a single group of features, as we hypothesized, the relation between OP and PC is crucial and leads to almost as strong performance in content words as using all features. To further understand the strong performance of OP-PC relation, Figure FIGREF28 shows the feature importance in the ablated model, measured by the normalized total gain (see the supplementary material for feature importance in the full model). A word's occurrence in both the OP and PC is clearly the most important feature, with distance between its POS tag distributions as the second most important. Recall that in Table TABREF18 we show that words that have similar POS behavior between the OP and PC are more likely to be echoed in the explanation.
Overall, it seems that word-level properties contribute the most valuable signals for predicting stopwords. If we restrict ourselves to only information in either an OP or PC, how a word is used in a PC is much more predictive of content word echoing (0.233 vs 0.191). This observation suggests that, for content words, the PC captures more valuable information than the OP. This finding is somewhat surprising given that the OP sets the topic of discussion and writes the explanation.
As for the effects of removing a group of features, we can see that there is little change in the performance on content words. This can be explained by the strong performance of the OP-PC relation on its own, and the possibility of the OP-PC relation being approximated by OP and PC usage. Again, word-level properties are valuable for strong performance in stopwords.
Performance vs. word source (Figure FIGREF28). We further break down the performance by where a word is from. We can group a word based on whether it shows up only in an OP, a PC, or both OP and PC, as shown in Table TABREF1. There is a striking difference between the performance in the three categories (e.g., for all words, 0.63 in OP & PC vs. 0.271 in PC only). The strong performance on words in both the OP and PC applies to stopwords and content words, even accounting for the shift in the random baseline, and recalls the importance of occurring both in OP and PC as a feature.
Furthermore, the echoing of words from the PC is harder to predict (0.271) than from the OP (0.347) despite the fact that words only in PCs are more likely to be echoed than words only in OPs (13.5% vs. 8.6%). The performance difference is driven by stopwords, suggesting that our overall model is better at capturing signals for stopwords used in OPs. This might relate to the fact that the OP and the explanation are written by the same author; prior studies have demonstrated the important role of stopwords for authorship attribution BIBREF43.
Nouns are the most reliably predicted part-of-speech tag within content words (Table TABREF35). Next, we break down the performance by part-of-speech tags. We focus on the part-of-speech tags that are semantically important, namely, nouns, proper nouns, verbs, adverbs, and adjectives.
Prediction performance can be seen as a proxy for how reliably a part-of-speech tag is reused when providing explanations. Consistent with our expectations for the importance of nouns and verbs, our models achieve the best performance on nouns within content words. Verbs are more challenging, but become the least difficult tag to predict when we consider all words, likely due to stopwords such as “have.” Adjectives turn out to be the most challenging category, suggesting that adjectival choice is perhaps more arbitrary than other parts of speech, and therefore less central to the process of constructing an explanation. The important role of nouns in shaping explanations resonates with the high recall rate of nouns in memory tasks BIBREF44.
Predicting Pointers ::: The Effect on Generating Explanations
One way to measure the ultimate success of understanding pointers in explanations is to be able to generate explanations. We use the pointer generator network with coverage as our starting point BIBREF4, BIBREF46 (see the supplementary material for details). We investigate whether concatenating our proposed features with word embeddings can improve generation performance, as measured by ROUGE scores.
Consistent with results in sequence tagging for word-level echoing prediction, our proposed features can enhance a neural model with copying mechanisms (see Table TABREF37). Specifically, their use leads to statistically significant improvement in ROUGE-1 and ROUGE-L, while slightly hurting the performance in ROUGE-2 (the difference is not statistically significant). We also find that our features can increase the likelihood of copying: an average of 17.59 unique words get copied to the generated explanation with our features, compared to 14.17 unique words without our features. For comparison, target explanations have an average of 34.81 unique words. We emphasize that generating explanations is a very challenging task (evidenced by the low ROUGE scores and examples in the supplementary material), and that fully solving the generation task requires more work.
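A minimal sketch of the ROUGE evaluation using the rouge-score package; the choice of this particular implementation is an assumption, as the paper does not state which ROUGE scorer was used.

```python
from rouge_score import rouge_scorer   # pip install rouge-score

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

def rouge_f1(target_explanation, generated_explanation):
    """Return the F-measure for ROUGE-1, ROUGE-2, and ROUGE-L."""
    scores = scorer.score(target_explanation, generated_explanation)
    return {name: s.fmeasure for name, s in scores.items()}
```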
Concluding Discussions
In this work, we conduct the first large-scale empirical study of everyday explanations in the context of persuasion. We assemble a novel dataset and formulate a word-level prediction task to understand the selective nature of explanations. Our results suggest that the relation between an OP and PC plays an important role in predicting the echoing of content words, while a word's non-contextual properties matter for stopwords. We show that vanilla LSTMs fail to learn some of the features we develop and that our proposed features can improve the performance in generating explanations using pointer networks. We also demonstrate the important role of nouns in shaping explanations.
Although our approach strongly outperforms random baselines, the relatively low F1 scores indicate that predicting which word is echoed in explanations is a very challenging task. It follows that we are only able to derive a limited understanding of how people choose to echo words in explanations. The extent to which explanation construction is fundamentally random BIBREF47, or whether there exist other unidentified patterns, is of course an open question. We hope that our study and the resources that we release encourage further work in understanding the pragmatics of explanations.
There are many promising research directions for future work in advancing the computational understanding of explanations. First, although /r/ChangeMyView has the useful property that its explanations are closely connected to its explananda, it is important to further investigate the extent to which our findings generalize beyond /r/ChangeMyView and Reddit and establish universal properties of explanations. Second, it is important to connect the words in explanations that we investigate here to the structure of explanations in psychology BIBREF7. Third, in addition to understanding what goes into an explanation, we need to understand what makes an explanation effective. A better understanding of explanations not only helps develop explainable AI, but also informs the process of collecting explanations that machine learning systems learn from BIBREF48, BIBREF49, BIBREF50.
Acknowledgments
We thank Kimberley Buchan, anonymous reviewers, and members of the NLP+CSS research group at CU Boulder for their insightful comments and discussions; Jason Baumgartner for sharing the dataset that enabled this research.
Supplemental Material ::: Preprocessing.
Before tokenizing, we pass each OP, PC, and explanation through a preprocessing pipeline, with the following steps:
Occasionally, /r/ChangeMyView's moderators will edit comments, prefixing their edits with “Hello, users of CMV” or “This is a footnote” (see Table TABREF46). We remove this, and any text that follows on the same line.
We replace URLs with a “@url@” token, defining a URL to be any string which matches the following regular expression: (https?://[^\s)]*).
We replace “$\Delta $” symbols and their analogues (such as “$\delta $”, “&#8710;”, and “!delta”) with the word “delta”. We also remove the word “delta” from explanations, if the explanation starts with delta.
Reddit–specific prefixes, such as “u/” (denoting a user) and “r/” (denoting a subreddit) are removed, as we observed that they often interfered with spaCy's ability to correctly parse its inputs.
We remove any text matching the regular expression EDIT(.*?):.* from the beginning of the match to the end of that line, as well as variations, such as Edit(.*?):.*.
Reddit allows users to insert blockquoted text. We extract any blockquotes and surround them with standard quotation marks.
We replace all contiguous whitespace with a single space. We also do this with tab characters and carriage returns, and with two or more hyphens, asterisks, or underscores.
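A minimal sketch implementing a few of the steps above, using the regular expressions given in the text; the list of delta variants is only a subset, and the handling of explanation-initial “delta” is omitted.

```python
import re

URL_RE = re.compile(r"(https?://[^\s)]*)")
EDIT_RE = re.compile(r"(?i)edit(.*?):.*")                   # EDIT(.*?):.* and case variations
DELTA_VARIANTS = ["\u0394", "\u03b4", "&#8710;", "!delta"]  # delta symbol, lowercase delta, HTML entity, !delta

def preprocess(text):
    text = URL_RE.sub("@url@", text)
    # Remove edit markers from the start of the match to the end of that line.
    text = "\n".join(EDIT_RE.sub("", line) for line in text.split("\n"))
    for variant in DELTA_VARIANTS:
        text = text.replace(variant, "delta")
    text = re.sub(r"[-*_]{2,}", " ", text)                  # runs of hyphens, asterisks, underscores
    text = re.sub(r"\s+", " ", text)                        # collapse contiguous whitespace
    return text.strip()
```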
Tokenizing the data. After passing text through our preprocessing pipeline, we use the default spaCy pipeline to extract part-of-speech tags, dependency tags, and entity details for each token BIBREF35. In addition, we use NLTK to stem words BIBREF36. This is used to compute all word level features discussed in Section 4 of the main paper.
Supplemental Material ::: PC Echoing OP
Figure FIGREF49 shows a similar U-shape in the probability of a word being echoed in a PC. However, visually, rare words seem more likely to have a high echoing probability in explanations, whereas words with moderate frequency have a higher echoing probability in PCs. As PCs tend to be longer than explanations, we also used the echoing probability of the most frequent words to normalize the probability of other words so that they are comparable. We indeed observed a higher likelihood of echoing rare words, but a lower likelihood of echoing words with moderate frequency, in explanations than in PCs.
Supplemental Material ::: Feature Calculation
Given an OP, PC, and explanation, we calculate a 66–dimensional vector for each unique stem in the concatenated OP and PC. Here, we describe the process of calculating each feature.
Inverse document frequency: for a stem $s$, the inverse document frequency is given by $\log \frac{N}{\mathrm {df}_s}$, where $N$ is the total number of documents (here, OPs and PCs) in the training set, and $\mathrm {df}_s$ is the number of documents in the training data whose set of stemmed words contains $s$.
Stem length: the number of characters in the stem.
Wordnet depth (min): starting with the stem, this is the length of the minimum hypernym path to the synset root.
Wordnet depth (max): similarly, this is the length of the maximum hypernym path.
Stem transfer probability: the percentage of times in which a stem seen in the explanandum is also seen in the explanation. If, during validation or testing, a stem is encountered for the first time, we set this to be the mean probability of transfer over all stems seen in the training data.
OP part–of–speech tags: a stem can represent multiple parts of speech. For example, both “traditions” and “traditional” will be stemmed to “tradit.” We count the percentage of times the given stem appears as each part–of–speech tag, following the Universal Dependencies scheme BIBREF53. If the stem does not appear in the OP, each part–of–speech feature will be $\frac{1}{16}$.
OP subject, object, and other: Given a stem $s$, we calculate the percentage of times that $s$'s surface forms in the OP are classified as subjects, objects, or something else by SpaCy. We follow the CLEAR guidelines BIBREF51 and use the following tags to indicate a subject: nsubj, nsubjpass, csubj, csubjpass, agent, and expl. Objects are identified using these tags: dobj, dative, attr, oprd. If $s$ does not appear at all in the OP, we let subject, object, and other each equal $\frac{1}{3}$.
OP term frequency: the number of times any surface form of a stem appears in the list of tokens that make up the OP.
OP normalized term frequency: the percentage of the OP's tokens which are a surface form of the given stem.
OP # of surface forms: the number of different surface forms for the given stem.
OP location: the average location of each surface form of the given stem which appears in the OP, where the location of a surface form is defined as the percentage of tokens which appear after that surface form. If the stem does not appear at all in the OP, this value is $\frac{1}{2}$.
OP is in quotes: the number of times the stem appears in the OP surrounded by quotation marks.
OP is entity: the percentage of tokens in the OP that are both a surface form for the given stem, and are tagged by SpaCy as one of the following entities: PERSON, NORP, FAC, ORG, GPE, LOC, PRODUCT, EVENT, WORK_OF_ART, LAW, and LANGUAGE.
PC equivalents of features 6-30.
In both OP and PC: 1, if one of the stem's surface forms appears in both the OP and PC. 0 otherwise.
# of unique surface forms in OP: for the given stem, the number of surface forms that appear in the OP, but not in the PC.
# of unique surface forms in PC: for the given stem, the number of surface forms that appear in the PC, but not in the OP.
Stem part–of–speech distribution difference: we consider the concatenation of features 6-21, along with the concatenation of features 31-46, as two distributions, and calculate the Jensen–Shannon divergence between them.
Stem dependency distribution difference: similarly, we consider the concatenation of features 22-24 (OP dependency labels), and the concatenation of features 47-49 (PC dependency labels), as two distributions, and calculate the Jensen–Shannon divergence between them.
OP length: the number of tokens in the OP.
PC length: the number of tokens in the PC.
Length difference: the absolute value of the difference between OP length and PC length.
Avg. word length difference: the difference between the average number of characters per token in the OP and the average number of characters per token in the PC.
OP/PC part–of–speech tag distribution difference: the Jensen–Shannon divergence between the part–of–speech tag distributions of the OP on the one hand, and the PC on the other.
Depth of the PC in the thread: since there can be many back–and–forth replies before a user awards a delta, we number each comment in a thread, starting at 0 for the OP, and incrementing for each new comment before the PC appears.
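A minimal sketch of three representative features from the list above: inverse document frequency, average location, and the Jensen–Shannon divergence between part-of-speech distributions. Note that SciPy's jensenshannon returns the distance (the square root of the divergence), so it is squared here; the exact normalization of the location feature is an assumption.

```python
import math
import numpy as np
from scipy.spatial.distance import jensenshannon

def inverse_document_frequency(stem, doc_freq, n_docs):
    return math.log(n_docs / doc_freq[stem])

def average_location(stem, tokens, stem_of, default=0.5):
    """Average fraction of tokens that appear after each surface form of the stem."""
    positions = [i for i, tok in enumerate(tokens) if stem_of(tok) == stem]
    if not positions:
        return default
    n = len(tokens)
    return float(np.mean([(n - 1 - i) / n for i in positions]))

def pos_distribution_difference(op_pos_dist, pc_pos_dist):
    """Jensen-Shannon divergence between two part-of-speech distributions (same tag order)."""
    return jensenshannon(np.asarray(op_pos_dist), np.asarray(pc_pos_dist), base=2) ** 2
```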
Supplemental Material ::: Word–level Prediction Task
For each non–LSTM classifier, we train 11 models: one full model, and forward and backward models for each of the five feature groups. To train, we fit on the training set and use the validation set for hyperparameter tuning.
For the random model, since the echo rate of the training set is 15%, we simply predict 1 with 15% probability, and 0 otherwise.
For logistic regression, we use the lbfgs solver. To tune hyperparameters, we perform an exhaustive grid search, with $C$ taking values from $\lbrace 10^{x}:x\in \lbrace -1, 0, 1, 2, 3, 4\rbrace \rbrace $, and the respective weights of the negative and positive classes taking values from $\lbrace (x, 1-x): x\in \lbrace 0.25, 0.20, 0.15\rbrace \rbrace $.
We also train XGBoost models. Here, we use a learning rate of $0.1$, 1000 estimator trees, and no subsampling. We perform an exhaustive grid search to tune hyperparameters, with the max tree depth equaling 5, 7, or 9, the minimum weight of a child equaling 3, 5, or 7, and the weight of a positive class instance equaling 3, 4, or 5.
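A minimal sketch of the grid search described above; the manual validation loop and scoring by F1 reflect the tuning setup in the main text, but the exact training code and names are assumptions. The XGBoost values shown are one point in the stated grid, not the tuned optimum.

```python
from itertools import product
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from xgboost import XGBClassifier

def tune_logistic_regression(X_train, y_train, X_val, y_val):
    best_model, best_f1 = None, -1.0
    for C, neg_w in product([0.1, 1, 10, 100, 1000, 10000], [0.25, 0.20, 0.15]):
        clf = LogisticRegression(solver="lbfgs", C=C, max_iter=1000,
                                 class_weight={0: neg_w, 1: 1 - neg_w})
        clf.fit(X_train, y_train)
        score = f1_score(y_val, clf.predict(X_val))
        if score > best_f1:
            best_model, best_f1 = clf, score
    return best_model, best_f1

# One configuration from the XGBoost grid (max_depth in {5, 7, 9}, min_child_weight in {3, 5, 7},
# scale_pos_weight in {3, 4, 5}); learning rate, tree count, and subsampling as stated above.
xgb = XGBClassifier(learning_rate=0.1, n_estimators=1000, subsample=1.0,
                    max_depth=7, min_child_weight=5, scale_pos_weight=4)
```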
Finally, we train two LSTM models, each with a single 300–dimensional hidden layer. Due to efficiency considerations, we eschewed a full search of the parameter space, but experimented with different values of dropout, learning rate, positive class weight, and batch size. We ultimately trained each model for five epochs with a batch size of 32 and a learning rate of 0.001, using the Adam optimizer BIBREF52. We also weight positive instances four times more highly than negative instances.
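A minimal PyTorch sketch of a tagger consistent with the description above (a single 300-dimensional hidden layer, positive instances weighted four times as heavily, Adam with a learning rate of 0.001); the directionality of the LSTM, the vocabulary size, and the 66-dimensional feature vector used in the feature-augmented variant are assumptions.

```python
import torch
import torch.nn as nn

class EchoTagger(nn.Module):
    """Tags each token of the concatenated OP + separator + PC sequence as echoed (1) or not (0)."""
    def __init__(self, vocab_size=50_000, embed_dim=300, feat_dim=66, hidden_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)     # initialized from GloVe in practice
        self.lstm = nn.LSTM(embed_dim + feat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids, features):
        x = torch.cat([self.embed(token_ids), features], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h).squeeze(-1)                       # per-token logits

model = EchoTagger()
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(4.0))  # positive class weighted 4x
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```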
Supplemental Material ::: Generating Explanations
We formulate an abstractive summarization task using an OP concatenated with the PC as a source, and the explanation as target. We train two models, one with the features described above, and one without. A shared vocabulary of 50k words is constructed from the training set by setting the maximum encoding length to 500 words. We set the maximum decoding length to 100. We use a pointer generator network with coverage for generating explanations, using a bidirectional LSTM as an encoder and a unidirectional LSTM as a decoder. Both use a 256-dimensional hidden state. The parameters of this network are tuned using a validation set of five thousand instances. We constrain the batch size to 16 and train the network for 20k steps, using the parameters described in Table TABREF82.
6073fa9050da76eeecd8aa3ccc7ecb16a238d83f | 6073fa9050da76eeecd8aa3ccc7ecb16a238d83f_0 | Q: What metrics are used in evaluation of this task?
Text: Introduction
Explanations are essential for understanding and learning BIBREF0. They can take many forms, ranging from everyday explanations for questions such as why one likes Star Wars, to sophisticated formalization in the philosophy of science BIBREF1, to simply highlighting features in recent work on interpretable machine learning BIBREF2.
Although everyday explanations are mostly encoded in natural language, natural language explanations remain understudied in NLP, partly due to a lack of appropriate datasets and problem formulations. To address these challenges, we leverage /r/ChangeMyView, a community dedicated to sharing counterarguments to controversial views on Reddit, to build a sizable dataset of naturally-occurring explanations. Specifically, in /r/ChangeMyView, an original poster (OP) first delineates the rationales for a (controversial) opinion (e.g., in Table TABREF1, “most hit music artists today are bad musicians”). Members of /r/ChangeMyView are invited to provide counterarguments. If a counterargument changes the OP's view, the OP awards a $\Delta $ to indicate the change and is required to explain why the counterargument is persuasive. In this work, we refer to what is being explained, including both the original post and the persuasive comment, as the explanandum.
An important advantage of explanations in /r/ChangeMyView is that the explanandum contains most of the required information to provide its explanation. These explanations often select key counterarguments in the persuasive comment and connect them with the original post. As shown in Table TABREF1, the explanation naturally points to, or echoes, part of the explanandum (including both the persuasive comment and the original post) and in this case highlights the argument of “music serving different purposes.”
These naturally-occurring explanations thus enable us to computationally investigate the selective nature of explanations: “people rarely, if ever, expect an explanation that consists of an actual and complete cause of an event. Humans are adept at selecting one or two causes from a sometimes infinite number of causes to be the explanation” BIBREF3. To understand the selective process of providing explanations, we formulate a word-level task to predict whether a word in an explanandum will be echoed in its explanation.
Inspired by the observation that words that are likely to be echoed are either frequent or rare, we propose a variety of features to capture how a word is used in the explanandum as well as its non-contextual properties in Section SECREF4. We find that a word's usage in the original post and in the persuasive argument are similarly related to being echoed, except in part-of-speech tags and grammatical relations. For instance, verbs in the original post are less likely to be echoed, while the relationship is reversed in the persuasive argument.
We further demonstrate that these features can significantly outperform a random baseline and even a neural model with significantly more knowledge of a word's context. The difficulty of predicting whether content words (i.e., non-stopwords) are echoed is much greater than that of stopwords, among which adjectives are the most difficult and nouns are relatively the easiest. This observation highlights the important role of nouns in explanations. We also find that the relationship between a word's usage in the original post and in the persuasive comment is crucial for predicting the echoing of content words. Our proposed features can also improve the performance of pointer generator networks with coverage in generating explanations BIBREF4.
To summarize, our main contributions are:
[itemsep=0pt,leftmargin=*,topsep=0pt]
We highlight the importance of computationally characterizing human explanations and formulate a concrete problem of predicting how information is selected from explananda to form explanations, including building a novel dataset of naturally-occurring explanations.
We provide a computational characterization of natural language explanations and demonstrate the U-shape in which words get echoed.
We identify interesting patterns in what gets echoed through a novel word-level classification task, including the importance of nouns in shaping explanations and the importance of contextual properties of both the original post and persuasive comment in predicting the echoing of content words.
We show that vanilla LSTMs fail to learn some of the features we develop and that the proposed features can even improve performance in generating explanations with pointer networks.
Our code and dataset is available at https://chenhaot.com/papers/explanation-pointers.html.
Related Work
To provide background for our study, we first present a brief overview of explanations for the NLP community, and then discuss the connection of our study with pointer networks, linguistic accommodation, and argumentation mining.
The most developed discussion of explanations is in the philosophy of science. Extensive studies aim to develop formal models of explanations (e.g., the deductive-nomological model in BIBREF5, see BIBREF1 and BIBREF6 for a review). In this view, explanations are like proofs in logic. On the other hand, psychology and cognitive sciences examine “everyday explanations” BIBREF0, BIBREF7. These explanations tend to be selective, are typically encoded in natural language, and shape our understanding and learning in life despite the absence of “axioms.” Please refer to BIBREF8 for a detailed comparison of these two modes of explanation.
Although explanations have attracted significant interest from the AI community thanks to the growing interest on interpretable machine learning BIBREF9, BIBREF10, BIBREF11, such studies seldom refer to prior work in social sciences BIBREF3. Recent studies also show that explanations such as highlighting important features induce limited improvement on human performance in detecting deceptive reviews and media biases BIBREF12, BIBREF13. Therefore, we believe that developing a computational understanding of everyday explanations is crucial for explainable AI. Here we provide a data-driven study of everyday explanations in the context of persuasion.
In particular, we investigate the “pointers” in explanations, inspired by recent work on pointer networks BIBREF14. Copying mechanisms allow a decoder to generate a token by copying from the source, and have been shown to be effective in generation tasks ranging from summarization to program synthesis BIBREF4, BIBREF15, BIBREF16. To the best of our knowledge, our work is the first to investigate the phenomenon of pointers in explanations.
Linguistic accommodation and studies on quotations also examine the phenomenon of reusing words BIBREF17, BIBREF18, BIBREF19, BIBREF20. For instance, BIBREF21 show that power differences are reflected in the echoing of function words; BIBREF22 find that news media prefer to quote locally distinct sentences in political debates. In comparison, our word-level formulation presents a fine-grained view of echoing words, and puts a stronger emphasis on content words than work on linguistic accommodation.
Finally, our work is concerned with an especially challenging problem in social interaction: persuasion. A battery of studies have done work to enhance our understanding of persuasive arguments BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, and the area of argumentation mining specifically investigates the structure of arguments BIBREF28, BIBREF29, BIBREF30. We build on previous work by BIBREF31 and leverage the dynamics of /r/ChangeMyView. Although our findings are certainly related to the persuasion process, we focus on understanding the self-described reasons for persuasion, instead of the structure of arguments or the factors that drive effective persuasion.
Dataset
Our dataset is derived from the /r/ChangeMyView subreddit, which has more than 720K subscribers BIBREF31. /r/ChangeMyView hosts conversations where someone expresses a view and others then try to change that person's mind. Despite being fundamentally based on argument, /r/ChangeMyView has a reputation for being remarkably civil and productive BIBREF32, e.g., a journalist wrote “In a culture of brittle talking points that we guard with our lives, Change My View is a source of motion and surprise” BIBREF33.
The delta mechanism in /r/ChangeMyView allows members to acknowledge opinion changes and enables us to identify explanations for opinion changes BIBREF34. Specifically, it requires “Any user, whether they're the OP or not, should reply to a comment that changed their view with a delta symbol and an explanation of the change.” As a result, we have access to tens of thousands of naturally-occurring explanations and associated explananda. In this work, we focus on the opinion changes of the original posters.
Throughout this paper, we use the following terminology:
[itemsep=-5pt,leftmargin=*,topsep=0pt]
An original post (OP) is an initial post where the original poster justifies his or her opinion. We also use OP to refer to the original poster.
A persuasive comment (PC) is a comment that directly leads to an opinion change on the part of the OP (i.e., winning a $\Delta $).
A top-level comment is a comment that directly replies to an OP, and /r/ChangeMyView requires the top-level comment to “challenge at least one aspect of OP’s stated view (however minor), unless they are asking a clarifying question.”
An explanation is a comment where an OP acknowledges a change in his or her view and provides an explanation of the change. As shown in Table TABREF1, the explanation not only provides a rationale, it can also include other discourse acts, such as expressing gratitude.
Using https://pushshift.io, we collect the posts and comments in /r/ChangeMyView from January 17th, 2013 to January 31st, 2019, and extract tuples of (OP, PC, explanation). We use the tuples from the final six months of our dataset as the test set, those from the six months before that as the validation set, and the remaining tuples as the training set. The sets contain 5,270, 5,831, and 26,617 tuples respectively. Note that there is no overlap in time between the three sets and the test set can therefore be used to assess generalization including potential changes in community norms and world events.
Preprocessing. We perform a number of preprocessing steps, such as converting blockquotes in Markdown to quotes, filtering explicit edits made by authors, mapping all URLs to a special @url@ token, and replacing hyperlinks with the link text. We ignore all triples that contain any deleted comments or posts. We use spaCy for tokenization and tagging BIBREF35. We also use the NLTK implementation of the Porter stemming algorithm to store the stemmed version of each word, for later use in our prediction task BIBREF36, BIBREF37. Refer to the supplementary material for more information on preprocessing.
Data statistics. Table TABREF16 provides basic statistics of the training tuples and how they compare to other comments. We highlight the fact that PCs are on average longer than top-level comments, suggesting that PCs contain substantial counterarguments that directly contribute to opinion change. Therefore, we simplify the problem by focusing on the (OP, PC, explanation) tuples and ignore any other exchanges between an OP and a commenter.
Below, we highlight some notable features of explanations as they appear in our dataset.
The length of explanations shows stronger correlation with that of OPs and PCs than between OPs and PCs (Figure FIGREF8). This observation indicates that explanations are somehow better related with OPs and PCs than PCs are with OPs in terms of language use. A possible reason is that the explainer combines their natural tendency towards length with accommodating the PC.
Explanations have a greater fraction of “pointers” than do persuasive comments (Figure FIGREF8). We measure the likelihood of a word in an explanation being copied from either its OP or PC and provide a similar probability for a PC for copying from its OP. As we discussed in Section SECREF1, the words in an explanation are much more likely to come from the existing discussion than are the words in a PC (59.8% vs 39.0%). This phenomenon holds even if we restrict ourselves to considering words outside quotations, which removes the effect of quoting other parts of the discussion, and if we focus only on content words, which removes the effect of “reusing” stopwords.
Relation between a word being echoed and its document frequency (Figure FIGREF8). Finally, as a preview of our main results, the document frequency of a word from the explanandum is related to the probability of being echoed in the explanation. Although the average likelihood declines as the document frequency gets lower, we observe an intriguing U-shape in the scatter plot. In other words, the words that are most likely to be echoed are either unusually frequent or unusually rare, while most words in the middle show a moderate likelihood of being echoed.
Understanding the Pointers in Explanations
To further investigate how explanations select words from the explanandum, we formulate a word-level prediction task to predict whether words in an OP or PC are echoed in its explanation. Formally, given a tuple of (OP, PC, explanation), we extract the unique stemmed words as $\mathcal {V}_{\text{OP}}, \mathcal {V}_{\text{PC}}, \mathcal {V}_{\text{EXP}}$. We then define the label for each word in the OP or PC, $w \in \mathcal {V}_{\text{OP}} \cup \mathcal {V}_{\text{PC}}$, based on the explanation as follows:
Our prediction task is thus a straightforward binary classification task at the word level. We develop the following five groups of features to capture properties of how a word is used in the explanandum (see Table TABREF18 for the full list):
[itemsep=0pt,leftmargin=*,topsep=0pt]
Non-contextual properties of a word. These features are derived directly from the word and capture the general tendency of a word being echoed in explanations.
Word usage in an OP or PC (two groups). These features capture how a word is used in an OP or PC. As a result, for each feature, we have two values for the OP and PC respectively.
How a word connects an OP and PC. These features look at the difference between word usage in the OP and PC. We expect this group to be the most important in our task.
General OP/PC properties. These features capture the general properties of a conversation. They can be used to characterize the background distribution of echoing.
Table TABREF18 further shows the intuition for including each feature, and condensed $t$-test results after Bonferroni correction. Specifically, we test whether the words that were echoed in explanations have different feature values from those that were not echoed. In addition to considering all words, we also separately consider stopwords and content words in light of Figure FIGREF8. Here, we highlight a few observations:
[itemsep=0pt,leftmargin=*,topsep=0pt]
Although we expect more complicated words (#characters) to be echoed more often, this is not the case on average. We also observe an interesting example of Simpson's paradox in the results for Wordnet depth BIBREF38: shallower words are more likely to be echoed across all words, but deeper words are more likely to be echoed in content words and stopwords.
OPs and PCs generally exhibit similar behavior for most features, except for part-of-speech and grammatical relation (subject, object, and other.) For instance, verbs in an OP are less likely to be echoed, while verbs in a PC are more likely to be echoed.
Although nouns from both OPs and PCs are less likely to be echoed, within content words, subjects and objects from an OP are more likely to be echoed. Surprisingly, subjects and objects in a PC are less likely to be echoed, which suggests that the original poster tends to refer back to their own subjects and objects, or introduce new ones, when providing explanations.
Later words in OPs and PCs are more likely to be echoed, especially in OPs. This could relate to OPs summarizing their rationales at the end of their post and PCs putting their strongest points last.
Although the number of surface forms in an OP or PC is positively correlated with being echoed, the differences in surface forms show reverse trends: the more surface forms of a word that show up only in the PC (i.e., not in the OP), the more likely a word is to be echoed. However, the reverse is true for the number of surface forms in only the OP. Such contrast echoes BIBREF31, in which dissimilarity in word usage between the OP and PC was a predictive feature of successful persuasion.
Predicting Pointers
We further examine the effectiveness of our proposed features in a predictive setting. These features achieve strong performance in the word-level classification task, and can enhance neural models in both the word-level task and generating explanations. However, the word-level task remains challenging, especially for content words.
Predicting Pointers ::: Experiment setup
We consider two classifiers for our word-level classification task: logistic regression and gradient boosting tree (XGBoost) BIBREF39. We hypothesized that XGBoost would outperform logistic regression because our problem is non-linear, as shown in Figure FIGREF8.
To examine the utility of our features in a neural framework, we further adapt our word-level task as a tagging task, and use LSTM as a baseline. Specifically, we concatenate an OP and PC with a special token as the separator so that an LSTM model can potentially distinguish the OP from PC, and then tag each word based on the label of its stemmed version. We use GloVe embeddings to initialize the word embeddings BIBREF40. We concatenate our proposed features of the corresponding stemmed word to the word embedding; the resulting difference in performance between a vanilla LSTM demonstrates the utility of our proposed features. We scale all features to $[0, 1]$ before fitting the models. As introduced in Section SECREF3, we split our tuples of (OP, PC, explanation) into training, validation, and test sets, and use the validation set for hyperparameter tuning. Refer to the supplementary material for additional details in the experiment.
Evaluation metric. Since our problem is imbalanced, we use the F1 score as our evaluation metric. For the tagging approach, we average the labels of words with the same stemmed version to obtain a single prediction for the stemmed word. To establish a baseline, we consider a random method that predicts the positive label with 0.15 probability (the base rate of positive instances).
Predicting Pointers ::: Prediction Performance
Overall performance (Figure FIGREF28). Although our word-level task is heavily imbalanced, all of our models outperform the random baseline by a wide margin. As expected, content words are much more difficult to predict than stopwords, but the best F1 score in content words more than doubles that of the random baseline (0.286 vs. 0.116). Notably, although we strongly improve on our random baseline, even our best F1 scores are relatively low, and this holds true regardless of the model used. Despite involving more tokens than standard tagging tasks (e.g., BIBREF41 and BIBREF42), predicting whether a word is going to be echoed in explanations remains a challenging problem.
Although the vanilla LSTM model incorporates additional knowledge (in the form of word embeddings), the feature-based XGBoost and logistic regression models both outperform the vanilla LSTM model. Concatenating our proposed features with word embeddings leads to improved performance from the LSTM model, which becomes comparable to XGBoost. This suggests that our proposed features can be difficult to learn with an LSTM alone.
Despite the non-linearity observed in Figure FIGREF8, XGBoost only outperforms logistic regression by a small margin. In the rest of this section, we use XGBoost to further examine the effectiveness of different groups of features, and model performance in different conditions.
Ablation performance (Table TABREF34). First, if we only consider a single group of features, as we hypothesized, the relation between OP and PC is crucial and leads to almost as strong performance in content words as using all features. To further understand the strong performance of OP-PC relation, Figure FIGREF28 shows the feature importance in the ablated model, measured by the normalized total gain (see the supplementary material for feature importance in the full model). A word's occurrence in both the OP and PC is clearly the most important feature, with distance between its POS tag distributions as the second most important. Recall that in Table TABREF18 we show that words that have similar POS behavior between the OP and PC are more likely to be echoed in the explanation.
Overall, it seems that word-level properties contribute the most valuable signals for predicting stopwords. If we restrict ourselves to only information in either an OP or PC, how a word is used in a PC is much more predictive of content word echoing (0.233 vs 0.191). This observation suggests that, for content words, the PC captures more valuable information than the OP. This finding is somewhat surprising given that the OP sets the topic of discussion and writes the explanation.
As for the effects of removing a group of features, we can see that there is little change in the performance on content words. This can be explained by the strong performance of the OP-PC relation on its own, and the possibility of the OP-PC relation being approximated by OP and PC usage. Again, word-level properties are valuable for strong performance in stopwords.
Performance vs. word source (Figure FIGREF28). We further break down the performance by where a word is from. We can group a word based on whether it shows up only in an OP, a PC, or both OP and PC, as shown in Table TABREF1. There is a striking difference between the performance in the three categories (e.g., for all words, 0.63 in OP & PC vs. 0.271 in PC only). The strong performance on words in both the OP and PC applies to stopwords and content words, even accounting for the shift in the random baseline, and recalls the importance of occurring both in OP and PC as a feature.
Furthermore, the echoing of words from the PC is harder to predict (0.271) than from the OP (0.347) despite the fact that words only in PCs are more likely to be echoed than words only in OPs (13.5% vs. 8.6%). The performance difference is driven by stopwords, suggesting that our overall model is better at capturing signals for stopwords used in OPs. This might relate to the fact that the OP and the explanation are written by the same author; prior studies have demonstrated the important role of stopwords for authorship attribution BIBREF43.
Nouns are the most reliably predicted part-of-speech tag within content words (Table TABREF35). Next, we break down the performance by part-of-speech tags. We focus on the part-of-speech tags that are semantically important, namely, nouns, proper nouns, verbs, adverbs, and adjectives.
Prediction performance can be seen as a proxy for how reliably a part-of-speech tag is reused when providing explanations. Consistent with our expectations for the importance of nouns and verbs, our models achieve the best performance on nouns within content words. Verbs are more challenging, but become the least difficult tag to predict when we consider all words, likely due to stopwords such as “have.” Adjectives turn out to be the most challenging category, suggesting that adjectival choice is perhaps more arbitrary than other parts of speech, and therefore less central to the process of constructing an explanation. The important role of nouns in shaping explanations resonates with the high recall rate of nouns in memory tasks BIBREF44.
Predicting Pointers ::: The Effect on Generating Explanations
One way to measure the ultimate success of understanding pointers in explanations is to be able to generate explanations. We use the pointer generator network with coverage as our starting point BIBREF4, BIBREF46 (see the supplementary material for details). We investigate whether concatenating our proposed features with word embeddings can improve generation performance, as measured by ROUGE scores.
Consistent with results in sequence tagging for word-level echoing prediction, our proposed features can enhance a neural model with copying mechanisms (see Table TABREF37). Specifically, their use leads to statistically significant improvement in ROUGE-1 and ROUGE-L, while slightly hurting the performance in ROUGE-2 (the difference is not statistically significant). We also find that our features can increase the likelihood of copying: an average of 17.59 unique words get copied to the generated explanation with our features, compared to 14.17 unique words without our features. For comparison, target explanations have an average of 34.81 unique words. We emphasize that generating explanations is a very challenging task (evidenced by the low ROUGE scores and examples in the supplementary material), and that fully solving the generation task requires more work.
Concluding Discussions
In this work, we conduct the first large-scale empirical study of everyday explanations in the context of persuasion. We assemble a novel dataset and formulate a word-level prediction task to understand the selective nature of explanations. Our results suggest that the relation between an OP and PC plays an important role in predicting the echoing of content words, while a word's non-contextual properties matter for stopwords. We show that vanilla LSTMs fail to learn some of the features we develop and that our proposed features can improve the performance in generating explanations using pointer networks. We also demonstrate the important role of nouns in shaping explanations.
Although our approach strongly outperforms random baselines, the relatively low F1 scores indicate that predicting which word is echoed in explanations is a very challenging task. It follows that we are only able to derive a limited understanding of how people choose to echo words in explanations. The extent to which explanation construction is fundamentally random BIBREF47, or whether there exist other unidentified patterns, is of course an open question. We hope that our study and the resources that we release encourage further work in understanding the pragmatics of explanations.
There are many promising research directions for future work in advancing the computational understanding of explanations. First, although /r/ChangeMyView has the useful property that its explanations are closely connected to its explananda, it is important to further investigate the extent to which our findings generalize beyond /r/ChangeMyView and Reddit and establish universal properties of explanations. Second, it is important to connect the words in explanations that we investigate here to the structure of explanations in pyschology BIBREF7. Third, in addition to understanding what goes into an explanation, we need to understand what makes an explanation effective. A better understanding of explanations not only helps develop explainable AI, but also informs the process of collecting explanations that machine learning systems learn from BIBREF48, BIBREF49, BIBREF50.
Acknowledgments
We thank Kimberley Buchan, anonymous reviewers, and members of the NLP+CSS research group at CU Boulder for their insightful comments and discussions; Jason Baumgartner for sharing the dataset that enabled this research.
Supplemental Material ::: Preprocessing.
Before tokenizing, we pass each OP, PC, and explanation through a preprocessing pipeline, with the following steps:
Occasionally, /r/ChangeMyView's moderators will edit comments, prefixing their edits with “Hello, users of CMV” or “This is a footnote” (see Table TABREF46). We remove this, and any text that follows on the same line.
We replace URLs with a “@url@” token, defining a URL to be any string which matches the following regular expression: (https?://[^\s)]*).
We replace “$\Delta $” symbols and their analogues—such as “$\delta $”, “&;#8710;”, and “!delta”—with the word “delta”. We also remove the word “delta” from explanations, if the explanation starts with delta.
Reddit–specific prefixes, such as “u/” (denoting a user) and “r/” (denoting a subreddit), are removed, as we observed that they often interfered with spaCy's ability to correctly parse its inputs.
We remove any text matching the regular expression EDIT(.*?):.* from the beginning of the match to the end of that line, as well as variations, such as Edit(.*?):.*.
Reddit allows users to insert blockquoted text. We extract any blockquotes and surround them with standard quotation marks.
We replace all contiguous whitespace with a single space. We also do this with tab characters and carriage returns, and with two or more hyphens, asterisks, or underscores.
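The following sketch shows how these substitutions might be chained in Python. The exact patterns for moderator notes and blockquotes, and the ordering of the steps, are simplifications rather than the authors' exact pipeline.

```python
import re

def preprocess(text: str) -> str:
    """Apply the cleanup steps listed above to a single OP, PC, or explanation."""
    # Moderator notes: drop the note and everything after it on the same line.
    text = re.sub(r"(Hello, users of CMV|This is a footnote).*", "", text)
    # URLs become a special token.
    text = re.sub(r"https?://[^\s)]*", "@url@", text)
    # Delta symbols and their analogues become the word "delta".
    text = re.sub(r"\u2206|\u0394|\u03b4|&#8710;|!delta", "delta", text, flags=re.IGNORECASE)
    # Reddit-specific prefixes such as "u/" and "r/" are removed.
    text = re.sub(r"\b[ur]/", "", text)
    # Explicit edits ("EDIT: ...", "Edit (typo): ...") are removed to the end of the line.
    text = re.sub(r"EDIT(.*?):.*", "", text, flags=re.IGNORECASE)
    # Blockquotes (lines starting with ">") are wrapped in standard quotation marks.
    text = re.sub(r"^>\s?(.*)$", r'"\1"', text, flags=re.MULTILINE)
    # Collapse runs of hyphens, asterisks, or underscores, then contiguous whitespace.
    text = re.sub(r"-{2,}|\*{2,}|_{2,}", " ", text)
    text = re.sub(r"\s+", " ", text)
    return text.strip()
```

Each document is cleaned independently, and the whitespace collapse is applied last so that the spaces introduced by earlier substitutions are also normalized.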
Tokenizing the data. After passing text through our preprocessing pipeline, we use the default spaCy pipeline to extract part-of-speech tags, dependency tags, and entity details for each token BIBREF35. In addition, we use NLTK to stem words BIBREF36. These outputs are used to compute all of the word-level features discussed in Section 4 of the main paper.
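A minimal sketch of this step is shown below. The paper only says it uses the default spaCy pipeline, so the specific model name here is an assumption.

```python
import spacy
from nltk.stem.porter import PorterStemmer

nlp = spacy.load("en_core_web_sm")   # assumed model; the paper does not name one
stemmer = PorterStemmer()

def analyze(text: str):
    """Return (token, stem, POS tag, dependency label, entity type) for each token."""
    doc = nlp(text)
    return [(tok.text,
             stemmer.stem(tok.text.lower()),
             tok.pos_,
             tok.dep_,
             tok.ent_type_) for tok in doc]
```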
Supplemental Material ::: PC Echoing OP
Figure FIGREF49 shows a similar U-shape in the probability of a word being echoed in the PC. Visually, however, rare words appear more likely to have a high echoing probability in explanations, whereas that probability is higher for words of moderate frequency in PCs. Because PCs tend to be longer than explanations, we also normalized each word's echoing probability by the echoing probability of the most frequent words so that the two settings are comparable. After this normalization, we still observe a higher likelihood of echoing rare words, but a lower likelihood of echoing words of moderate frequency, in explanations than in PCs.
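A sketch of this normalization follows. How many of the most frequent words form the reference set is not specified in the paper, so the top_k value below is an assumption.

```python
import numpy as np

def normalized_echo_prob(echo_prob, doc_freq, top_k=100):
    """Divide each word's echoing probability by the mean echoing probability of
    the top_k most document-frequent words, making explanations and PCs comparable."""
    echo_prob = np.asarray(echo_prob, dtype=float)
    order = np.argsort(-np.asarray(doc_freq))      # most frequent words first
    reference = echo_prob[order[:top_k]].mean()    # echoing probability of frequent words
    return echo_prob / reference
```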
Supplemental Material ::: Feature Calculation
Given an OP, PC, and explanation, we calculate a 66–dimensional vector for each unique stem in the concatenated OP and PC. Here, we describe the process of calculating each feature; a code sketch of a few representative features follows the list.
Inverse document frequency: for a stem $s$, the inverse document frequency is given by $\log \frac{N}{\mathrm {df}_s}$, where $N$ is the total number of documents (here, OPs and PCs) in the training set, and $\mathrm {df}_s$ is the number of documents in the training data whose set of stemmed words contains $s$.
Stem length: the number of characters in the stem.
Wordnet depth (min): starting with the stem, this is the length of the minimum hypernym path to the synset root.
Wordnet depth (max): similarly, this is the length of the maximum hypernym path.
Stem transfer probability: the percentage of training instances in which a stem seen in the explanandum is also seen in the explanation. If, during validation or testing, a stem is encountered for the first time, we set this to the mean transfer probability over all stems seen in the training data.
OP part–of–speech tags: a stem can represent multiple parts of speech. For example, both “traditions” and “traditional” will be stemmed to “tradit.” We count the percentage of times the given stem appears as each part–of–speech tag, following the Universal Dependencies scheme BIBREF53. If the stem does not appear in the OP, each part–of–speech feature will be $\frac{1}{16}$.
OP subject, object, and other: Given a stem $s$, we calculate the percentage of times that $s$'s surface forms in the OP are classified as subjects, objects, or something else by spaCy. We follow the CLEAR guidelines BIBREF51, and use the following tags to indicate a subject: nsubj, nsubjpass, csubj, csubjpass, agent, and expl. Objects are identified using these tags: dobj, dative, attr, oprd. If $s$ does not appear at all in the OP, we let subject, object, and other each equal $\frac{1}{3}$.
OP term frequency: the number of times any surface form of a stem appears in the list of tokens that make up the OP.
OP normalized term frequency: the percentage of the OP's tokens which are a surface form of the given stem.
OP # of surface forms: the number of different surface forms of the given stem that appear in the OP.
OP location: the average location of each surface form of the given stem which appears in the OP, where the location of a surface form is defined as the percentage of tokens which appear after that surface form. If the stem does not appear at all in the OP, this value is $\frac{1}{2}$.
OP is in quotes: the number of times the stem appears in the OP surrounded by quotation marks.
OP is entity: the percentage of tokens in the OP that are both a surface form for the given stem, and are tagged by SpaCy as one of the following entities: PERSON, NORP, FAC, ORG, GPE, LOC, PRODUCT, EVENT, WORK_OF_ART, LAW, and LANGUAGE.
PC equivalents of features 6-30.
In both OP and PC: 1 if one of the stem's surface forms appears in both the OP and PC, and 0 otherwise.
# of unique surface forms in OP: for the given stem, the number of surface forms that appear in the OP, but not in the PC.
# of unique surface forms in PC: for the given stem, the number of surface forms that appear in the PC, but not in the OP.
Stem part–of–speech distribution difference: we consider the concatenation of features 6-21, along with the concatenation of features 31-46, as two distributions, and calculate the Jensen–Shannon divergence between them.
Stem dependency distribution difference: similarly, we consider the concatenation of features 22-24 (OP dependency labels), and the concatenation of features 47-49 (PC dependency labels), as two distributions, and calculate the Jensen–Shannon divergence between them.
OP length: the number of tokens in the OP.
PC length: the number of tokens in the PC.
Length difference: the absolute value of the difference between OP length and PC length.
Avg. word length difference: the difference between the average number of characters per token in the OP and the average number of characters per token in the PC.
OP/PC part–of–speech tag distribution difference: the Jensen–Shannon divergence between the part–of–speech tag distributions of the OP on the one hand, and the PC on the other.
Depth of the PC in the thread: since there can be many back–and–forth replies before a user awards a delta, we number each comment in a thread, starting at 0 for the OP, and incrementing for each new comment before the PC appears.
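The sketch below illustrates three representative features from this list: inverse document frequency, the WordNet depths, and the Jensen–Shannon divergence between two tag distributions. The fallback values for unseen stems described above are omitted, and SciPy's jensenshannon (which returns a distance) is squared to obtain the divergence.

```python
import math
from collections import Counter

import numpy as np
from nltk.corpus import wordnet as wn
from scipy.spatial.distance import jensenshannon

def inverse_document_frequency(stem: str, doc_freq: dict, n_docs: int) -> float:
    """log(N / df_s), with doc_freq mapping stems to training document counts."""
    return math.log(n_docs / doc_freq[stem])

def wordnet_depths(word: str):
    """Min and max hypernym-path lengths over the word's synsets; (0, 0) if none."""
    synsets = wn.synsets(word)
    if not synsets:
        return 0, 0
    return (min(s.min_depth() for s in synsets),
            max(s.max_depth() for s in synsets))

def tag_distribution_difference(op_tags: Counter, pc_tags: Counter, tagset) -> float:
    """Jensen-Shannon divergence between the OP and PC tag distributions of a stem."""
    p = np.array([op_tags[t] for t in tagset], dtype=float)
    q = np.array([pc_tags[t] for t in tagset], dtype=float)
    p = p / p.sum() if p.sum() else np.full(len(tagset), 1 / len(tagset))
    q = q / q.sum() if q.sum() else np.full(len(tagset), 1 / len(tagset))
    return float(jensenshannon(p, q) ** 2)   # squared distance equals the divergence
```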
Supplemental Material ::: Word–level Prediction Task
For each non–LSTM classifier, we train 11 models: one full model and, for each of the five feature groups, a forward model (using only that group) and a backward model (removing that group from the full feature set). To train, we fit on the training set and use the validation set for hyperparameter tuning.
For the random model, since the echo rate of the training set is 15%, we simply predict 1 with 15% probability, and 0 otherwise.
For logistic regression, we use the lbfgs solver. To tune hyperparameters, we perform an exhaustive grid search, with $C$ taking values from $\lbrace 10^{x}:x\in \lbrace -1, 0, 1, 2, 3, 4\rbrace \rbrace $, and the respective weights of the negative and positive classes taking values from $\lbrace (x, 1-x): x\in \lbrace 0.25, 0.20, 0.15\rbrace \rbrace $.
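A sketch of this grid search with scikit-learn is shown below. X_train, y_train, X_valid, and y_valid are assumed to hold the 66-dimensional feature vectors and labels; PredefinedSplit is used so that the fixed validation set, rather than cross-validation, drives the selection, and the max_iter setting is only there to avoid convergence warnings rather than being taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, PredefinedSplit

param_grid = {
    "C": [0.1, 1, 10, 100, 1000, 10000],
    "class_weight": [{0: 0.25, 1: 0.75}, {0: 0.20, 1: 0.80}, {0: 0.15, 1: 0.85}],
}

# -1 marks rows used only for fitting; 0 marks the single validation fold.
test_fold = np.concatenate([np.full(len(X_train), -1), np.zeros(len(X_valid))])

search = GridSearchCV(
    LogisticRegression(solver="lbfgs", max_iter=1000),
    param_grid,
    scoring="f1",
    cv=PredefinedSplit(test_fold),
)
search.fit(np.vstack([X_train, X_valid]), np.concatenate([y_train, y_valid]))
best_lr = search.best_estimator_
```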
We also train XGBoost models. Here, we use a learning rate of $0.1$, 1000 estimator trees, and no subsampling. We perform an exhaustive grid search to tune hyperparameters, with the max tree depth equaling 5, 7, or 9, the minimum weight of a child equaling 3, 5, or 7, and the weight of a positive class instance equaling 3, 4, or 5.
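A corresponding sketch for the XGBoost grid, written as an explicit loop over the stated values; the same assumed data variables are reused.

```python
from itertools import product

from sklearn.metrics import f1_score
from xgboost import XGBClassifier

best_score, best_model = 0.0, None
for depth, min_child, pos_weight in product([5, 7, 9], [3, 5, 7], [3, 4, 5]):
    model = XGBClassifier(
        learning_rate=0.1,
        n_estimators=1000,
        subsample=1.0,              # no subsampling
        max_depth=depth,
        min_child_weight=min_child,
        scale_pos_weight=pos_weight,
    )
    model.fit(X_train, y_train)
    score = f1_score(y_valid, model.predict(X_valid))
    if score > best_score:
        best_score, best_model = score, model
```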
Finally, we train two LSTM models, each with a single 300–dimensional hidden layer. Due to efficiency considerations, we eschewed a full search of the parameter space, but experimented with different values of dropout, learning rate, positive class weight, and batch size. We ultimately trained each model for five epochs with a batch size of 32 and a learning rate of 0.001, using the Adam optimizer BIBREF52. We also weight positive instances four times more highly than negative instances.
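A minimal PyTorch sketch of the tagging model with the hyperparameters above follows. The framework, the handling of the GloVe matrix (glove_weights), and the batching code are all assumptions, since the paper does not specify them.

```python
import torch
import torch.nn as nn

class EchoTagger(nn.Module):
    """Tags each token of the concatenated [OP ; separator ; PC] sequence with a
    logit for whether its stem is echoed in the explanation."""

    def __init__(self, glove_weights, feature_dim=0, hidden_dim=300):
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(glove_weights, freeze=False)
        self.lstm = nn.LSTM(glove_weights.size(1) + feature_dim,
                            hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids, word_features=None):
        x = self.embed(token_ids)
        if word_features is not None:          # optionally concatenate the 66 features
            x = torch.cat([x, word_features], dim=-1)
        hidden, _ = self.lstm(x)
        return self.out(hidden).squeeze(-1)    # one logit per token

model = EchoTagger(glove_weights, feature_dim=66)
# Positive (echoed) tokens are weighted four times as heavily as negative ones.
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(4.0))
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```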
Supplemental Material ::: Generating Explanations
We formulate an abstractive summarization task using an OP concatenated with the PC as a source, and the explanation as target. We train two models, one with the features described above, and one without. A shared vocabulary of 50k words is constructed from the training set by setting the maximum encoding length to 500 words. We set the maximum decoding length to 100. We use a pointer generator network with coverage for generating explanations, using a bidirectional LSTM as an encoder and a unidirectional LSTM as a decoder. Both use a 256-dimensional hidden state. The parameters of this network are tuned using a validation set of five thousand instances. We constrain the batch size to 16 and train the network for 20k steps, using the parameters described in Table TABREF82. | F1 score |
eacd7e540cc34cb45770fcba463f4bf968681d59 | eacd7e540cc34cb45770fcba463f4bf968681d59_0 | Q: Do authors provide any explanation for intriguing patterns of word being echoed? | No
1124804c3702499b78cf0678bab5867e81284b6c | 1124804c3702499b78cf0678bab5867e81284b6c_0 | Q: What features are proposed?
Text: Introduction
Explanations are essential for understanding and learning BIBREF0. They can take many forms, ranging from everyday explanations for questions such as why one likes Star Wars, to sophisticated formalization in the philosophy of science BIBREF1, to simply highlighting features in recent work on interpretable machine learning BIBREF2.
Although everyday explanations are mostly encoded in natural language, natural language explanations remain understudied in NLP, partly due to a lack of appropriate datasets and problem formulations. To address these challenges, we leverage /r/ChangeMyView, a community dedicated to sharing counterarguments to controversial views on Reddit, to build a sizable dataset of naturally-occurring explanations. Specifically, in /r/ChangeMyView, an original poster (OP) first delineates the rationales for a (controversial) opinion (e.g., in Table TABREF1, “most hit music artists today are bad musicians”). Members of /r/ChangeMyView are invited to provide counterarguments. If a counterargument changes the OP's view, the OP awards a $\Delta $ to indicate the change and is required to explain why the counterargument is persuasive. In this work, we refer to what is being explained, including both the original post and the persuasive comment, as the explanandum.
An important advantage of explanations in /r/ChangeMyView is that the explanandum contains most of the required information to provide its explanation. These explanations often select key counterarguments in the persuasive comment and connect them with the original post. As shown in Table TABREF1, the explanation naturally points to, or echoes, part of the explanandum (including both the persuasive comment and the original post) and in this case highlights the argument of “music serving different purposes.”
These naturally-occurring explanations thus enable us to computationally investigate the selective nature of explanations: “people rarely, if ever, expect an explanation that consists of an actual and complete cause of an event. Humans are adept at selecting one or two causes from a sometimes infinite number of causes to be the explanation” BIBREF3. To understand the selective process of providing explanations, we formulate a word-level task to predict whether a word in an explanandum will be echoed in its explanation.
Inspired by the observation that words that are likely to be echoed are either frequent or rare, we propose a variety of features to capture how a word is used in the explanandum as well as its non-contextual properties in Section SECREF4. We find that a word's usage in the original post and in the persuasive argument are similarly related to being echoed, except in part-of-speech tags and grammatical relations. For instance, verbs in the original post are less likely to be echoed, while the relationship is reversed in the persuasive argument.
We further demonstrate that these features can significantly outperform a random baseline and even a neural model with significantly more knowledge of a word's context. The difficulty of predicting whether content words (i.e., non-stopwords) are echoed is much greater than that of stopwords, among which adjectives are the most difficult and nouns are relatively the easiest. This observation highlights the important role of nouns in explanations. We also find that the relationship between a word's usage in the original post and in the persuasive comment is crucial for predicting the echoing of content words. Our proposed features can also improve the performance of pointer generator networks with coverage in generating explanations BIBREF4.
To summarize, our main contributions are:
[itemsep=0pt,leftmargin=*,topsep=0pt]
We highlight the importance of computationally characterizing human explanations and formulate a concrete problem of predicting how information is selected from explananda to form explanations, including building a novel dataset of naturally-occurring explanations.
We provide a computational characterization of natural language explanations and demonstrate the U-shape in which words get echoed.
We identify interesting patterns in what gets echoed through a novel word-level classification task, including the importance of nouns in shaping explanations and the importance of contextual properties of both the original post and persuasive comment in predicting the echoing of content words.
We show that vanilla LSTMs fail to learn some of the features we develop and that the proposed features can even improve performance in generating explanations with pointer networks.
Our code and dataset is available at https://chenhaot.com/papers/explanation-pointers.html.
Related Work
To provide background for our study, we first present a brief overview of explanations for the NLP community, and then discuss the connection of our study with pointer networks, linguistic accommodation, and argumentation mining.
The most developed discussion of explanations is in the philosophy of science. Extensive studies aim to develop formal models of explanations (e.g., the deductive-nomological model in BIBREF5, see BIBREF1 and BIBREF6 for a review). In this view, explanations are like proofs in logic. On the other hand, psychology and cognitive sciences examine “everyday explanations” BIBREF0, BIBREF7. These explanations tend to be selective, are typically encoded in natural language, and shape our understanding and learning in life despite the absence of “axioms.” Please refer to BIBREF8 for a detailed comparison of these two modes of explanation.
Although explanations have attracted significant interest from the AI community thanks to the growing interest on interpretable machine learning BIBREF9, BIBREF10, BIBREF11, such studies seldom refer to prior work in social sciences BIBREF3. Recent studies also show that explanations such as highlighting important features induce limited improvement on human performance in detecting deceptive reviews and media biases BIBREF12, BIBREF13. Therefore, we believe that developing a computational understanding of everyday explanations is crucial for explainable AI. Here we provide a data-driven study of everyday explanations in the context of persuasion.
In particular, we investigate the “pointers” in explanations, inspired by recent work on pointer networks BIBREF14. Copying mechanisms allow a decoder to generate a token by copying from the source, and have been shown to be effective in generation tasks ranging from summarization to program synthesis BIBREF4, BIBREF15, BIBREF16. To the best of our knowledge, our work is the first to investigate the phenomenon of pointers in explanations.
Linguistic accommodation and studies on quotations also examine the phenomenon of reusing words BIBREF17, BIBREF18, BIBREF19, BIBREF20. For instance, BIBREF21 show that power differences are reflected in the echoing of function words; BIBREF22 find that news media prefer to quote locally distinct sentences in political debates. In comparison, our word-level formulation presents a fine-grained view of echoing words, and puts a stronger emphasis on content words than work on linguistic accommodation.
Finally, our work is concerned with an especially challenging problem in social interaction: persuasion. A battery of studies have done work to enhance our understanding of persuasive arguments BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, and the area of argumentation mining specifically investigates the structure of arguments BIBREF28, BIBREF29, BIBREF30. We build on previous work by BIBREF31 and leverage the dynamics of /r/ChangeMyView. Although our findings are certainly related to the persuasion process, we focus on understanding the self-described reasons for persuasion, instead of the structure of arguments or the factors that drive effective persuasion.
Dataset
Our dataset is derived from the /r/ChangeMyView subreddit, which has more than 720K subscribers BIBREF31. /r/ChangeMyView hosts conversations where someone expresses a view and others then try to change that person's mind. Despite being fundamentally based on argument, /r/ChangeMyView has a reputation for being remarkably civil and productive BIBREF32, e.g., a journalist wrote “In a culture of brittle talking points that we guard with our lives, Change My View is a source of motion and surprise” BIBREF33.
The delta mechanism in /r/ChangeMyView allows members to acknowledge opinion changes and enables us to identify explanations for opinion changes BIBREF34. Specifically, it requires “Any user, whether they're the OP or not, should reply to a comment that changed their view with a delta symbol and an explanation of the change.” As a result, we have access to tens of thousands of naturally-occurring explanations and associated explananda. In this work, we focus on the opinion changes of the original posters.
Throughout this paper, we use the following terminology:
[itemsep=-5pt,leftmargin=*,topsep=0pt]
An original post (OP) is an initial post where the original poster justifies his or her opinion. We also use OP to refer to the original poster.
A persuasive comment (PC) is a comment that directly leads to an opinion change on the part of the OP (i.e., winning a $\Delta $).
A top-level comment is a comment that directly replies to an OP, and /r/ChangeMyView requires the top-level comment to “challenge at least one aspect of OP’s stated view (however minor), unless they are asking a clarifying question.”
An explanation is a comment where an OP acknowledges a change in his or her view and provides an explanation of the change. As shown in Table TABREF1, the explanation not only provides a rationale, it can also include other discourse acts, such as expressing gratitude.
Using https://pushshift.io, we collect the posts and comments in /r/ChangeMyView from January 17th, 2013 to January 31st, 2019, and extract tuples of (OP, PC, explanation). We use the tuples from the final six months of our dataset as the test set, those from the six months before that as the validation set, and the remaining tuples as the training set. The sets contain 5,270, 5,831, and 26,617 tuples respectively. Note that there is no overlap in time between the three sets and the test set can therefore be used to assess generalization including potential changes in community norms and world events.
Preprocessing. We perform a number of preprocessing steps, such as converting blockquotes in Markdown to quotes, filtering explicit edits made by authors, mapping all URLs to a special @url@ token, and replacing hyperlinks with the link text. We ignore all triples that contain any deleted comments or posts. We use spaCy for tokenization and tagging BIBREF35. We also use the NLTK implementation of the Porter stemming algorithm to store the stemmed version of each word, for later use in our prediction task BIBREF36, BIBREF37. Refer to the supplementary material for more information on preprocessing.
Data statistics. Table TABREF16 provides basic statistics of the training tuples and how they compare to other comments. We highlight the fact that PCs are on average longer than top-level comments, suggesting that PCs contain substantial counterarguments that directly contribute to opinion change. Therefore, we simplify the problem by focusing on the (OP, PC, explanation) tuples and ignore any other exchanges between an OP and a commenter.
Below, we highlight some notable features of explanations as they appear in our dataset.
The length of explanations shows stronger correlation with that of OPs and PCs than between OPs and PCs (Figure FIGREF8). This observation indicates that, in terms of language use, explanations are more closely tied to both OPs and PCs than PCs are to OPs. A possible reason is that the explainer combines their natural tendency towards length with accommodating the PC.
Explanations have a greater fraction of “pointers” than do persuasive comments (Figure FIGREF8). We measure the likelihood of a word in an explanation being copied from either its OP or PC and provide a similar probability for a PC for copying from its OP. As we discussed in Section SECREF1, the words in an explanation are much more likely to come from the existing discussion than are the words in a PC (59.8% vs 39.0%). This phenomenon holds even if we restrict ourselves to considering words outside quotations, which removes the effect of quoting other parts of the discussion, and if we focus only on content words, which removes the effect of “reusing” stopwords.
Relation between a word being echoed and its document frequency (Figure FIGREF8). Finally, as a preview of our main results, the document frequency of a word from the explanandum is related to the probability of being echoed in the explanation. Although the average likelihood declines as the document frequency gets lower, we observe an intriguing U-shape in the scatter plot. In other words, the words that are most likely to be echoed are either unusually frequent or unusually rare, while most words in the middle show a moderate likelihood of being echoed.
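The following is a minimal sketch, not the authors' code, of how the copy-rate and echoing-probability statistics above could be computed from tuples of stemmed tokens; the tuple structure and the use of document frequency as a grouping variable are assumptions based on the dataset description.

```python
from collections import Counter

def echo_stats(tuples):
    """tuples: iterable of (op_stems, pc_stems, exp_stems), each a list of stemmed tokens.
    For every stem that appears in an explanandum (OP or PC), count how often it is
    also echoed in the corresponding explanation, and how many tuples it appears in."""
    appears, echoed = Counter(), Counter()
    for op, pc, exp in tuples:
        exp_set = set(exp)
        for stem in set(op) | set(pc):
            appears[stem] += 1
            echoed[stem] += stem in exp_set
    echo_prob = {s: echoed[s] / appears[s] for s in appears}
    return echo_prob, appears  # `appears` serves as a rough document-frequency proxy

# Scatter-plotting echo_prob against (log) document frequency reproduces the
# U-shape described above, under these simplifying assumptions.
```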
Understanding the Pointers in Explanations
To further investigate how explanations select words from the explanandum, we formulate a word-level prediction task to predict whether words in an OP or PC are echoed in its explanation. Formally, given a tuple of (OP, PC, explanation), we extract the unique stemmed words as $\mathcal {V}_{\text{OP}}, \mathcal {V}_{\text{PC}}, \mathcal {V}_{\text{EXP}}$. We then define the label for each word in the OP or PC, $w \in \mathcal {V}_{\text{OP}} \cup \mathcal {V}_{\text{PC}}$, based on the explanation as follows: a word receives a positive label if its stem appears in $\mathcal {V}_{\text{EXP}}$ (i.e., it is echoed in the explanation), and a negative label otherwise.
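A minimal sketch of this label construction, assuming NLTK's Porter stemmer (as used in the preprocessing section); it is illustrative rather than the authors' exact implementation.

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def stems(tokens):
    return {stemmer.stem(t.lower()) for t in tokens}

def word_labels(op_tokens, pc_tokens, exp_tokens):
    """Return {stem: 1/0} for every unique stem in the OP or PC:
    1 if the stem is echoed in the explanation, 0 otherwise."""
    v_op, v_pc, v_exp = stems(op_tokens), stems(pc_tokens), stems(exp_tokens)
    return {w: int(w in v_exp) for w in v_op | v_pc}
```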
Our prediction task is thus a straightforward binary classification task at the word level. We develop the following five groups of features to capture properties of how a word is used in the explanandum (see Table TABREF18 for the full list):
Non-contextual properties of a word. These features are derived directly from the word and capture the general tendency of a word being echoed in explanations.
Word usage in an OP or PC (two groups). These features capture how a word is used in an OP or PC. As a result, for each feature, we have two values for the OP and PC respectively.
How a word connects an OP and PC. These features look at the difference between word usage in the OP and PC. We expect this group to be the most important in our task.
General OP/PC properties. These features capture the general properties of a conversation. They can be used to characterize the background distribution of echoing.
Table TABREF18 further shows the intuition for including each feature, and condensed $t$-test results after Bonferroni correction. Specifically, we test whether the words that were echoed in explanations have different feature values from those that were not echoed. In addition to considering all words, we also separately consider stopwords and content words in light of Figure FIGREF8. Here, we highlight a few observations:
Although we expect more complicated words (#characters) to be echoed more often, this is not the case on average. We also observe an interesting example of Simpson's paradox in the results for Wordnet depth BIBREF38: shallower words are more likely to be echoed across all words, but deeper words are more likely to be echoed in content words and stopwords.
OPs and PCs generally exhibit similar behavior for most features, except for part-of-speech and grammatical relation (subject, object, and other.) For instance, verbs in an OP are less likely to be echoed, while verbs in a PC are more likely to be echoed.
Although nouns from both OPs and PCs are less likely to be echoed, within content words, subjects and objects from an OP are more likely to be echoed. Surprisingly, subjects and objects in a PC are less likely to be echoed, which suggests that the original poster tends to refer back to their own subjects and objects, or introduce new ones, when providing explanations.
Later words in OPs and PCs are more likely to be echoed, especially in OPs. This could relate to OPs summarizing their rationales at the end of their post and PCs putting their strongest points last.
Although the number of surface forms in an OP or PC is positively correlated with being echoed, the differences in surface forms show reverse trends: the more surface forms of a word that show up only in the PC (i.e., not in the OP), the more likely a word is to be echoed. However, the reverse is true for the number of surface forms in only the OP. Such contrast echoes BIBREF31, in which dissimilarity in word usage between the OP and PC was a predictive feature of successful persuasion.
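The per-feature significance tests described before these observations (two-sample $t$-tests comparing feature values of echoed vs. non-echoed words, with Bonferroni correction across features) could be run along the following lines; array names and the significance threshold are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def feature_ttests(X, y, feature_names, alpha=0.05):
    """X: (n_words, n_features) numpy feature matrix; y: 0/1 array, 1 if the word was echoed.
    Returns features whose means differ significantly after Bonferroni correction."""
    n_tests = X.shape[1]
    results = []
    for j, name in enumerate(feature_names):
        t, p = stats.ttest_ind(X[y == 1, j], X[y == 0, j], equal_var=False)
        results.append((name, t, min(p * n_tests, 1.0)))  # Bonferroni-corrected p-value
    return [(name, t, p) for name, t, p in results if p < alpha]
```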
Predicting Pointers
We further examine the effectiveness of our proposed features in a predictive setting. These features achieve strong performance in the word-level classification task, and can enhance neural models in both the word-level task and generating explanations. However, the word-level task remains challenging, especially for content words.
Predicting Pointers ::: Experiment setup
We consider two classifiers for our word-level classification task: logistic regression and gradient boosting tree (XGBoost) BIBREF39. We hypothesized that XGBoost would outperform logistic regression because our problem is non-linear, as shown in Figure FIGREF8.
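A minimal sketch of the two feature-based classifiers, assuming a 66-dimensional word-level feature matrix (as described in the supplementary material) scaled to $[0, 1]$; the random placeholder data and the specific hyperparameter values shown here are illustrative, and the grids actually searched are listed in the supplementary material.

```python
import numpy as np
import xgboost as xgb
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X_train = rng.random((1000, 66))      # placeholder for the 66-d feature vectors
y_train = rng.integers(0, 2, 1000)    # placeholder echo labels

scaler = MinMaxScaler()               # features are scaled to [0, 1]
X_train_s = scaler.fit_transform(X_train)

logreg = LogisticRegression(solver="lbfgs", class_weight={0: 0.2, 1: 0.8}, max_iter=1000)
logreg.fit(X_train_s, y_train)

booster = xgb.XGBClassifier(
    n_estimators=1000, learning_rate=0.1, max_depth=7,
    min_child_weight=5, scale_pos_weight=4, subsample=1.0)
booster.fit(X_train_s, y_train)
```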
To examine the utility of our features in a neural framework, we further adapt our word-level task as a tagging task, and use LSTM as a baseline. Specifically, we concatenate an OP and PC with a special token as the separator so that an LSTM model can potentially distinguish the OP from PC, and then tag each word based on the label of its stemmed version. We use GloVe embeddings to initialize the word embeddings BIBREF40. We concatenate our proposed features of the corresponding stemmed word to the word embedding; the resulting difference in performance between this feature-augmented model and a vanilla LSTM demonstrates the utility of our proposed features. We scale all features to $[0, 1]$ before fitting the models. As introduced in Section SECREF3, we split our tuples of (OP, PC, explanation) into training, validation, and test sets, and use the validation set for hyperparameter tuning. Refer to the supplementary material for additional details of the experiment.
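A hedged PyTorch sketch of this tagging formulation: the concatenated OP+PC token sequence is embedded, each embedding is concatenated with the (scaled) feature vector of its stem, and an LSTM predicts a per-token echo label. The 300-dimensional hidden layer and the 4:1 positive-class weight follow the supplementary material; everything else is illustrative.

```python
import torch
import torch.nn as nn

class EchoTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, feat_dim=66, hidden=300):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)   # initialized from GloVe in practice
        self.lstm = nn.LSTM(emb_dim + feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)                # echoed vs. not echoed

    def forward(self, token_ids, feats):
        # token_ids: (batch, seq_len); feats: (batch, seq_len, feat_dim), scaled to [0, 1]
        x = torch.cat([self.emb(token_ids), feats], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)                             # (batch, seq_len, 2) logits

# Positive instances are up-weighted (4:1 per the supplement) through the loss:
criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 4.0]))
```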
Evaluation metric. Since our problem is imbalanced, we use the F1 score as our evaluation metric. For the tagging approach, we average the labels of words with the same stemmed version to obtain a single prediction for the stemmed word. To establish a baseline, we consider a random method that predicts the positive label with 0.15 probability (the base rate of positive instances).
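A small sketch of this evaluation step: token-level predictions from the tagger are averaged over tokens sharing a stem and thresholded, then F1 is computed against the stem-level labels; variable names are assumptions.

```python
import numpy as np
from collections import defaultdict
from sklearn.metrics import f1_score

def stem_level_f1(token_stems, token_preds, stem_labels):
    """token_stems: stems aligned with token_preds (0/1 per token);
    stem_labels: {stem: gold 0/1}. Average token predictions per stem, threshold at 0.5."""
    scores = defaultdict(list)
    for s, p in zip(token_stems, token_preds):
        scores[s].append(p)
    stems = sorted(stem_labels)
    y_true = [stem_labels[s] for s in stems]
    y_pred = [int(np.mean(scores[s]) >= 0.5) if scores[s] else 0 for s in stems]
    return f1_score(y_true, y_pred)

# The random baseline predicts the positive label with probability 0.15, the base rate.
```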
Predicting Pointers ::: Prediction Performance
Overall performance (Figure FIGREF28). Although our word-level task is heavily imbalanced, all of our models outperform the random baseline by a wide margin. As expected, content words are much more difficult to predict than stopwords, but the best F1 score in content words more than doubles that of the random baseline (0.286 vs. 0.116). Notably, although we strongly improve on our random baseline, even our best F1 scores are relatively low, and this holds true regardless of the model used. Despite involving more tokens than standard tagging tasks (e.g., BIBREF41 and BIBREF42), predicting whether a word is going to be echoed in explanations remains a challenging problem.
Although the vanilla LSTM model incorporates additional knowledge (in the form of word embeddings), the feature-based XGBoost and logistic regression models both outperform the vanilla LSTM model. Concatenating our proposed features with word embeddings leads to improved performance from the LSTM model, which becomes comparable to XGBoost. This suggests that our proposed features can be difficult to learn with an LSTM alone.
Despite the non-linearity observed in Figure FIGREF8, XGBoost only outperforms logistic regression by a small margin. In the rest of this section, we use XGBoost to further examine the effectiveness of different groups of features, and model performance in different conditions.
Ablation performance (Table TABREF34). First, if we only consider a single group of features, as we hypothesized, the relation between OP and PC is crucial and leads to almost as strong performance in content words as using all features. To further understand the strong performance of OP-PC relation, Figure FIGREF28 shows the feature importance in the ablated model, measured by the normalized total gain (see the supplementary material for feature importance in the full model). A word's occurrence in both the OP and PC is clearly the most important feature, with distance between its POS tag distributions as the second most important. Recall that in Table TABREF18 we show that words that have similar POS behavior between the OP and PC are more likely to be echoed in the explanation.
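The normalized total-gain importances referred to above can be read off a fitted XGBoost model roughly as follows (`booster` refers to the classifier from the earlier sketch; feature names default to f0, f1, ... unless supplied explicitly).

```python
raw = booster.get_booster().get_score(importance_type="total_gain")
total = sum(raw.values())
importance = {feat: gain / total
              for feat, gain in sorted(raw.items(), key=lambda kv: -kv[1])}
```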
Overall, it seems that word-level properties contribute the most valuable signals for predicting stopwords. If we restrict ourselves to only information in either an OP or PC, how a word is used in a PC is much more predictive of content word echoing (0.233 vs 0.191). This observation suggests that, for content words, the PC captures more valuable information than the OP. This finding is somewhat surprising given that the OP sets the topic of discussion and writes the explanation.
As for the effects of removing a group of features, we can see that there is little change in the performance on content words. This can be explained by the strong performance of the OP-PC relation on its own, and the possibility of the OP-PC relation being approximated by OP and PC usage. Again, word-level properties are valuable for strong performance in stopwords.
Performance vs. word source (Figure FIGREF28). We further break down the performance by where a word is from. We can group a word based on whether it shows up only in an OP, a PC, or both OP and PC, as shown in Table TABREF1. There is a striking difference between the performance in the three categories (e.g., for all words, 0.63 in OP & PC vs. 0.271 in PC only). The strong performance on words in both the OP and PC applies to stopwords and content words, even accounting for the shift in the random baseline, and recalls the importance of occurring both in OP and PC as a feature.
Furthermore, the echoing of words from the PC is harder to predict (0.271) than from the OP (0.347) despite the fact that words only in PCs are more likely to be echoed than words only in OPs (13.5% vs. 8.6%). The performance difference is driven by stopwords, suggesting that our overall model is better at capturing signals for stopwords used in OPs. This might relate to the fact that the OP and the explanation are written by the same author; prior studies have demonstrated the important role of stopwords for authorship attribution BIBREF43.
Nouns are the most reliably predicted part-of-speech tag within content words (Table TABREF35). Next, we break down the performance by part-of-speech tags. We focus on the part-of-speech tags that are semantically important, namely, nouns, proper nouns, verbs, adverbs, and adjectives.
Prediction performance can be seen as a proxy for how reliably a part-of-speech tag is reused when providing explanations. Consistent with our expectations for the importance of nouns and verbs, our models achieve the best performance on nouns within content words. Verbs are more challenging, but become the least difficult tag to predict when we consider all words, likely due to stopwords such as “have.” Adjectives turn out to be the most challenging category, suggesting that adjectival choice is perhaps more arbitrary than other parts of speech, and therefore less central to the process of constructing an explanation. The important role of nouns in shaping explanations resonates with the high recall rate of nouns in memory tasks BIBREF44.
Predicting Pointers ::: The Effect on Generating Explanations
One way to measure the ultimate success of understanding pointers in explanations is to be able to generate explanations. We use the pointer generator network with coverage as our starting point BIBREF4, BIBREF46 (see the supplementary material for details). We investigate whether concatenating our proposed features with word embeddings can improve generation performance, as measured by ROUGE scores.
Consistent with results in sequence tagging for word-level echoing prediction, our proposed features can enhance a neural model with copying mechanisms (see Table TABREF37). Specifically, their use leads to statistically significant improvement in ROUGE-1 and ROUGE-L, while slightly hurting the performance in ROUGE-2 (the difference is not statistically significant). We also find that our features can increase the likelihood of copying: an average of 17.59 unique words get copied to the generated explanation with our features, compared to 14.17 unique words without our features. For comparison, target explanations have an average of 34.81 unique words. We emphasize that generating explanations is a very challenging task (evidenced by the low ROUGE scores and examples in the supplementary material), and that fully solving the generation task requires more work.
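A minimal sketch of the copy statistic quoted above, i.e., the average number of unique generated-explanation words that also appear in the corresponding OP or PC; variable names are assumptions.

```python
def mean_unique_copied(generated, explananda):
    """generated: list of token lists (model outputs); explananda: list of sets of
    OP+PC tokens, aligned with `generated`."""
    counts = [len(set(gen) & src) for gen, src in zip(generated, explananda)]
    return sum(counts) / len(counts)
```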
Concluding Discussions
In this work, we conduct the first large-scale empirical study of everyday explanations in the context of persuasion. We assemble a novel dataset and formulate a word-level prediction task to understand the selective nature of explanations. Our results suggest that the relation between an OP and PC plays an important role in predicting the echoing of content words, while a word's non-contextual properties matter for stopwords. We show that vanilla LSTMs fail to learn some of the features we develop and that our proposed features can improve the performance in generating explanations using pointer networks. We also demonstrate the important role of nouns in shaping explanations.
Although our approach strongly outperforms random baselines, the relatively low F1 scores indicate that predicting which word is echoed in explanations is a very challenging task. It follows that we are only able to derive a limited understanding of how people choose to echo words in explanations. The extent to which explanation construction is fundamentally random BIBREF47, or whether there exist other unidentified patterns, is of course an open question. We hope that our study and the resources that we release encourage further work in understanding the pragmatics of explanations.
There are many promising research directions for future work in advancing the computational understanding of explanations. First, although /r/ChangeMyView has the useful property that its explanations are closely connected to its explananda, it is important to further investigate the extent to which our findings generalize beyond /r/ChangeMyView and Reddit and establish universal properties of explanations. Second, it is important to connect the words in explanations that we investigate here to the structure of explanations in psychology BIBREF7. Third, in addition to understanding what goes into an explanation, we need to understand what makes an explanation effective. A better understanding of explanations not only helps develop explainable AI, but also informs the process of collecting explanations that machine learning systems learn from BIBREF48, BIBREF49, BIBREF50.
Acknowledgments
We thank Kimberley Buchan, anonymous reviewers, and members of the NLP+CSS research group at CU Boulder for their insightful comments and discussions; Jason Baumgartner for sharing the dataset that enabled this research.
Supplemental Material ::: Preprocessing.
Before tokenizing, we pass each OP, PC, and explanation through a preprocessing pipeline, with the following steps:
Occasionally, /r/ChangeMyView's moderators will edit comments, prefixing their edits with “Hello, users of CMV” or “This is a footnote” (see Table TABREF46). We remove this, and any text that follows on the same line.
We replace URLs with a “@url@” token, defining a URL to be any string which matches the following regular expression: (https?://[^\s)]*).
We replace “$\Delta $” symbols and their analogues—such as “$\delta $”, “&#8710;”, and “!delta”—with the word “delta”. We also remove the word “delta” from explanations, if the explanation starts with delta.
Reddit–specific prefixes, such as “u/” (denoting a user) and “r/” (denoting a subreddit) are removed, as we observed that they often interfered with spaCy's ability to correctly parse its inputs.
We remove any text matching the regular expression EDIT(.*?):.* from the beginning of the match to the end of that line, as well as variations, such as Edit(.*?):.*.
Reddit allows users to insert blockquoted text. We extract any blockquotes and surround them with standard quotation marks.
We replace all contiguous whitespace with a single space. We also do this with tab characters and carriage returns, and with two or more hyphens, asterisks, or underscores.
Tokenizing the data. After passing text through our preprocessing pipeline, we use the default spaCy pipeline to extract part-of-speech tags, dependency tags, and entity details for each token BIBREF35. In addition, we use NLTK to stem words BIBREF36. This is used to compute all word level features discussed in Section 4 of the main paper.
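The pipeline described above could look roughly like the sketch below (URL replacement, delta normalization, prefix stripping, removal of explicit edit lines, whitespace collapsing, then spaCy tagging and NLTK stemming). The regular expressions are simplified approximations of the listed steps, not the exact rules used in the paper.

```python
import re
import spacy
from nltk.stem import PorterStemmer

nlp = spacy.load("en_core_web_sm")
stemmer = PorterStemmer()

def preprocess(text):
    text = re.sub(r"https?://[^\s)]*", "@url@", text)             # URLs -> @url@
    text = re.sub(r"(?i)!?delta|\u2206|\u0394", "delta", text)    # delta symbols -> "delta" (coarse)
    text = re.sub(r"\b[ur]/", "", text)                           # strip Reddit u/ and r/ prefixes
    text = re.sub(r"(?im)^edit.*?:.*$", "", text)                 # drop explicit EDIT lines
    text = re.sub(r"\s+", " ", text).strip()                      # collapse whitespace
    return text

def tokenize(text):
    doc = nlp(preprocess(text))
    # keep token text, POS tag, dependency label, and the stemmed form
    return [(tok.text, tok.pos_, tok.dep_, stemmer.stem(tok.text.lower())) for tok in doc]
```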
Supplemental Material ::: PC Echoing OP
Figure FIGREF49 shows a similar U-shape in the probability of a word being echoed in PC. However, visually, we can see that rare words seem more likely to have high echoing probability in explanations, while that probability is higher for words with moderate frequency in PCs. As PCs tend to be longer than explanations, we also used the echoing probability of the most frequent words to normalize the probability of other words so that they are comparable. We indeed observed a higher likelihood of echoing the rare words, but lower likelihood of echoing words with moderate frequency in explanations than in PCs.
Supplemental Material ::: Feature Calculation
Given an OP, PC, and explanation, we calculate a 66–dimensional vector for each unique stem in the concatenated OP and PC. Here, we describe the process of calculating each feature; a brief code sketch of a few of these computations follows the list.
Inverse document frequency: for a stem $s$, the inverse document frequency is given by $\log \frac{N}{\mathrm {df}_s}$, where $N$ is the total number of documents (here, OPs and PCs) in the training set, and $\mathrm {df}_s$ is the number of documents in the training data whose set of stemmed words contains $s$.
Stem length: the number of characters in the stem.
Wordnet depth (min): starting with the stem, this is the length of the minimum hypernym path to the synset root.
Wordnet depth (max): similarly, this is the length of the maximum hypernym path.
Stem transfer probability: the percentage of times in which a stem seen in the explanandum is also seen in the explanation. If, during validation or testing, a stem is encountered for the first time, we set this to be the mean probability of transfer over all stems seen in the training data.
OP part–of–speech tags: a stem can represent multiple parts of speech. For example, both “traditions” and “traditional” will be stemmed to “tradit.” We count the percentage of times the given stem appears as each part–of–speech tag, following the Universal Dependencies scheme BIBREF53. If the stem does not appear in the OP, each part–of–speech feature will be $\frac{1}{16}$.
OP subject, object, and other: Given a stem $s$, we calculate the percentage of times that $s$'s surface forms in the OP are classified as subjects, objects, or something else by SpaCy. We follow the CLEAR guidelines, BIBREF51 and use the following tags to indicate a subject: nsubj, nsubjpass, csubj, csubjpass, agent, and expl. Objects are identified using these tags: dobj, dative, attr, oprd. If $s$ does not appear at all in the OP, we let subject, object, and other each equal $\frac{1}{3}$.
OP term frequency: the number of times any surface form of a stem appears in the list of tokens that make up the OP.
OP normalized term frequency: the percentage of the OP's tokens which are a surface form of the given stem.
OP # of surface forms: the number of different surface forms for the given stem.
OP location: the average location of each surface form of the given stem which appears in the OP, where the location of a surface form is defined as the percentage of tokens which appear after that surface form. If the stem does not appear at all in the OP, this value is $\frac{1}{2}$.
OP is in quotes: the number of times the stem appears in the OP surrounded by quotation marks.
OP is entity: the percentage of tokens in the OP that are both a surface form for the given stem, and are tagged by SpaCy as one of the following entities: PERSON, NORP, FAC, ORG, GPE, LOC, PRODUCT, EVENT, WORK_OF_ART, LAW, and LANGUAGE.
PC equivalents of features 6-30.
In both OP and PC: 1, if one of the stem's surface forms appears in both the OP and PC. 0 otherwise.
# of unique surface forms in OP: for the given stem, the number of surface forms that appear in the OP, but not in the PC.
# of unique surface forms in PC: for the given stem, the number of surface forms that appear in the PC, but not in the OP.
Stem part–of–speech distribution difference: we consider the concatenation of features 6-21, along with the concatenation of features 31-46, as two distributions, and calculate the Jensen–Shannon divergence between them.
Stem dependency distribution difference: similarly, we consider the concatenation of features 22-24 (OP dependency labels), and the concatenation of features 47-49 (PC dependency labels), as two distributions, and calculate the Jensen–Shannon divergence between them.
OP length: the number of tokens in the OP.
PC length: the number of tokens in the PC.
Length difference: the absolute value of the difference between OP length and PC length.
Avg. word length difference: the difference between the average number of characters per token in the OP and the average number of characters per token in the PC.
OP/PC part–of–speech tag distribution difference: the Jensen–Shannon divergence between the part–of–speech tag distributions of the OP on the one hand, and the PC on the other.
Depth of the PC in the thread: since there can be many back–and–forth replies before a user awards a delta, we number each comment in a thread, starting at 0 for the OP, and incrementing for each new comment before the PC appears.
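As referenced at the start of this list, here is a hedged sketch, not the authors' code, of three of the feature computations: inverse document frequency, the in-both-OP-and-PC indicator, and the Jensen–Shannon divergence between a stem's OP and PC part-of-speech distributions.

```python
import math
import numpy as np
from scipy.spatial.distance import jensenshannon

def idf(stem, doc_freq, n_docs):
    return math.log(n_docs / doc_freq.get(stem, 1))

def in_both(stem, op_stems, pc_stems):
    return int(stem in op_stems and stem in pc_stems)

def pos_distribution(stem, tagged_tokens, tagset):
    """tagged_tokens: list of (stem, POS) pairs. Uniform if the stem never occurs."""
    counts = np.array([sum(1 for s, p in tagged_tokens if s == stem and p == t)
                       for t in tagset], dtype=float)
    return counts / counts.sum() if counts.sum() else np.full(len(tagset), 1 / len(tagset))

def pos_divergence(stem, op_tagged, pc_tagged, tagset):
    p = pos_distribution(stem, op_tagged, tagset)
    q = pos_distribution(stem, pc_tagged, tagset)
    return jensenshannon(p, q) ** 2   # scipy returns the JS distance; squaring gives the divergence
```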
Supplemental Material ::: Word–level Prediction Task
For each non–LSTM classifier, we train 11 models: one full model, and forward and backward models for each of the five feature groups. To train, we fit on the training set and use the validation set for hyperparameter tuning.
For the random model, since the echo rate of the training set is 15%, we simply predict 1 with 15% probability, and 0 otherwise.
For logistic regression, we use the lbfgs solver. To tune hyperparameters, we perform an exhaustive grid search, with $C$ taking values from $\lbrace 10^{x}:x\in \lbrace -1, 0, 1, 2, 3, 4\rbrace \rbrace $, and the respective weights of the negative and positive classes taking values from $\lbrace (x, 1-x): x\in \lbrace 0.25, 0.20, 0.15\rbrace \rbrace $.
We also train XGBoost models. Here, we use a learning rate of $0.1$, 1000 estimator trees, and no subsampling. We perform an exhaustive grid search to tune hyperparameters, with the max tree depth equaling 5, 7, or 9, the minimum weight of a child equaling 3, 5, or 7, and the weight of a positive class instance equaling 3, 4, or 5.
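The two hyperparameter searches just described could be expressed as follows; the grids match the values listed, while the use of GridSearchCV (with cross-validation rather than the paper's fixed validation set) and the scoring choice are simplifications of this sketch.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

logreg_grid = {
    "C": [0.1, 1, 10, 100, 1000, 10000],
    "class_weight": [{0: x, 1: 1 - x} for x in (0.25, 0.20, 0.15)],
}
xgb_grid = {
    "max_depth": [5, 7, 9],
    "min_child_weight": [3, 5, 7],
    "scale_pos_weight": [3, 4, 5],
}

logreg_search = GridSearchCV(LogisticRegression(solver="lbfgs", max_iter=1000),
                             logreg_grid, scoring="f1")
xgb_search = GridSearchCV(XGBClassifier(learning_rate=0.1, n_estimators=1000, subsample=1.0),
                          xgb_grid, scoring="f1")
# logreg_search.fit(X_train, y_train); xgb_search.fit(X_train, y_train)
# A PredefinedSplit could be used instead to tune on a fixed validation set, as in the paper.
```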
Finally, we train two LSTM models, each with a single 300–dimensional hidden layer. Due to efficiency considerations, we eschewed a full search of the parameter space, but experimented with different values of dropout, learning rate, positive class weight, and batch size. We ultimately trained each model for five epochs with a batch size of 32 and a learning rate of 0.001, using the Adam optimizer BIBREF52. We also weight positive instances four times more highly than negative instances.
Supplemental Material ::: Generating Explanations
We formulate an abstractive summarization task using an OP concatenated with the PC as a source, and the explanation as target. We train two models, one with the features described above, and one without. A shared vocabulary of 50k words is constructed from the training set by setting the maximum encoding length to 500 words. We set the maximum decoding length to 100. We use a pointer generator network with coverage for generating explanations, using a bidirectional LSTM as an encoder and a unidirectional LSTM as a decoder. Both use a 256-dimensional hidden state. The parameters of this network are tuned using a validation set of five thousand instances. We constrain the batch size to 16 and train the network for 20k steps, using the parameters described in Table TABREF82. | Non-contextual properties of a word, Word usage in an OP or PC (two groups), How a word connects an OP and PC, General OP/PC properties |
2b78052314cb730824836ea69bc968df7964b4e4 | 2b78052314cb730824836ea69bc968df7964b4e4_0 | Q: Which datasets are used to train this model?
Text: Introduction
Asking relevant and intelligent questions has always been an integral part of human learning, as it can help assess the user's understanding of a piece of text (an article, an essay etc.). However, forming questions manually can be sometimes arduous. Automated question generation (QG) systems can help alleviate this problem by learning to generate questions on a large scale and in lesser time. Such a system has applications in a myriad of areas such as FAQ generation, intelligent tutoring systems, and virtual assistants.
The task for a QG system is to generate meaningful, syntactically correct, semantically sound and natural questions from text. Additionally, to further automate the assessment of human users, it is highly desirable that the questions are relevant to the text and have supporting answers present in the text.
Figure 1 below shows a sample of questions generated by our approach using a variety of configurations (vanilla sentence, feature tagged sentence and answer encoded sentence) that will be described later in this paper.
Initial attempts at automated question generation were heavily dependent on a limited, ad-hoc, hand-crafted set of rules BIBREF0 , BIBREF1 . These rules focus mainly on the syntactic structure of the text and are limited in their application to sentences of simple structure. Recently, the success of sequence to sequence learning models BIBREF2 opened up possibilities of looking beyond a fixed set of rules for the task of question generation BIBREF3 , BIBREF4 . When we encode ground truth answers into the sentence along with other linguistic features, we obtain an improvement of up to 4 BLEU points, along with an improvement in the quality of the generated questions. A recent deep learning approach to question generation BIBREF3 investigates a simpler task of generating questions only from a triplet of subject, relation and object. In contrast, we build upon recent works that train sequence to sequence models for generating questions from natural language text.
Our work significantly improves the latest work of sequence to sequence learning based question generation using deep networks BIBREF4 by making use of (i) an additional module to predict span of best answer candidate on which to generate the question (ii) several additional rich set of linguistic features to help model generalize better (iii) suitably modified decoder to generate questions more relevant to the sentence.
The rest of the paper is organized as follows. In Section "Problem Formulation" we formally describe our question generation problem, followed by a discussion on related work in Section "Related Work" . In Section "Approach and Contributions" we describe our approach and methodology and summarize our main contributions. In Sections "Named Entity Selection" and "Question Generation" we describe the two main components of our framework. Implementation details of the models are described in Section "Implementation Details" , followed by experimental results in Section "Experiments and Results" and conclusion in Section "Conclusion" .
Problem Formulation
Given a sentence S, viewed as a sequence of words, our goal is to generate a question Q, which is syntactically and semantically correct, meaningful and natural. More formally, given a sentence S, our model's main objective is to learn the underlying conditional probability distribution $P(\textbf {Q}|\textbf {S};\theta )$ parameterized by $\theta $ to generate the most appropriate question that is closest to the human generated question(s). Our model learns $\theta $ during training using sentence/question pairs such that the probability $P(\textbf {Q}|\textbf {S};\theta $ ) is maximized over the given training dataset.
Let the sentence S be a sequence of $M$ words $(w_1, w_2, w_3, ...w_M)$ , and question Q a sequence of $N$ words $(y_1, y_2, y_3,...y_N)$ . Mathematically, the model is meant to generate Q* such that:
$$\mathbf {Q^*} = \underset{\textbf {Q}}{\operatorname{argmax}}~P(\textbf {Q}|\textbf {S};\theta ) = \underset{y_1,\ldots ,y_N}{\operatorname{argmax}}~\prod _{i=1}^{N}P(y_i|y_1,\ldots ,y_{i-1},w_1,\ldots ,w_M;\theta )$$ (Eq. 3)
Equation (3) is to be realized using an RNN-based architecture, which is described in detail in Section UID17 .
Related Work
Heilman and Smith BIBREF0 use a set of hand-crafted syntax-based rules to generate questions from simple declarative sentences. The system identifies multiple possible answer phrases from all declarative sentences using the constituency parse tree structure of each sentence. The system then over-generates questions and ranks them statistically by assigning scores using logistic regression.
BIBREF1 use semantics of the text by converting it into the Minimal Recursion Semantics notation BIBREF5 . Rules specific to the summarized semantics are applied to generate questions. Most of the approaches proposed for the QGSTEC challenge BIBREF6 are also rule-based systems, some of which put to use sentence features such as part of speech (POS) tags and named entity relations (NER) tags. BIBREF7 use ASSERT (an automatic statistical semantic role tagger that can annotate naturally occurring text with semantic arguments) for semantic role parses, generate questions based on rules and rank them based on subtopic similarity score using ESSK (Extended String Subsequence Kernel). BIBREF8 break sentences into fine and coarse classes and proceed to generate questions based on templates matching these classes.
All approaches mentioned so far are heavily dependent on rules whose design requires deep linguistic knowledge and yet are not exhaustive enough. Recent successes in neural machine translation BIBREF2 , BIBREF9 have helped address this problem by letting deep neural nets learn the implicit rules through data. This approach has inspired application of sequence to sequence learning to automated question generation. BIBREF3 propose an attention-based BIBREF10 , BIBREF11 approach to question generation from a pre-defined template of knowledge base triples (subject, relation, object). Additionally, recent studies suggest that the sharp learning capability of neural networks does not make linguistic features redundant in machine translation. BIBREF12 suggest augmenting each word with its linguistic features such as POS, NER. BIBREF13 suggest a tree-based encoder to incorporate features, although for a different application.
We build on the recent sequence to sequence learning-based method of question generation by BIBREF4 , but with significant differences and improvements from all previous works in the following ways. (i) Unlike BIBREF4 our question generation technique is pivoted on identification of the best candidate answer (span) around which the question should be generated. (ii) Our approach is enhanced with the use of several syntactic and linguistic features that help in learning models that generalize well. (iii) We propose a modified decoder to generate questions relevant to the text.
Approach and Contributions
Our approach to generating question-answer pairs from text is a two-stage process: in the first stage we select the most relevant and appropriate candidate answer, i.e., the pivotal answer, using an answer selection module, and in the second stage we encode the answer span in the sentence and use a sequence to sequence model with a rich set of linguistic features to generate questions for the pivotal answer.
Our sentence encoder transforms the input sentence into a list of fixed-length continuous vector word representation, each input symbol being represented as a vector. The question decoder takes in the output from the sentence encoder and produces one symbol at a time and stops at the EOS (end of sentence) marker. To focus on certain important words while generating questions (decoding) we use a global attention mechanism. The attention module is connected to both the sentence encoder as well as the question decoder, thus allowing the question decoder to focus on appropriate segments of the sentence while generating the next word of the question. We include linguistic features for words so that the model can learn more generalized syntactic transformations. We provide a detailed description of these modules in the following sections. Here is a summary of our three main contributions: (1) a versatile neural network-based answer selection and Question Generation (QG) approach and an associated dataset of question/sentence pairs suitable for learning answer selection, (2) incorporation of linguistic features that help generalize the learning to syntactic and semantic transformations of the input, and (3) a modified decoder to generate the question most relevant to the text.
Answer Selection and Encoding
In applications such as reading comprehension, it is natural for a question to be generated keeping the answer in mind (hereafter referred to as the `pivotal' answer). Identifying the most appropriate pivotal answer will allow comprehension be tested more easily and with even higher automation. We propose a novel named entity selection model and answer selection model based on Pointer Networks BIBREF14 . These models give us the span of pivotal answer in the sentence, which we encode using the BIO notation while generating the questions.
Named Entity Selection
In our first approach, we restrict our pivotal answer to be one of the named entities in the sentence, extracted using the Stanford CoreNLP toolkit. To choose the most appropriate pivotal answer for QG from a set of candidate entities present in the sentence we propose a named entity selection model. We train a multi-layer perceptron on the sentence, named entities present in the sentence and the ground truth answer. The model learns to predict the pivotal answer given the sentence and a set of candidate entities. The sentence $S = (w_1, w_2, ... , w_n)$ is first encoded using a 2 layered unidirectional LSTM encoder into hidden activations $H = (h_1^s, h_2^s, ... , h_n^s)$ . For a named entity $NE = (w_i, ... , w_j)$ , a vector representation (R) is created as $<h_n^s;h_{mean}^s;h_{mean}^{ne}>$ , where $h_n^s$ is the final state of the hidden activations, $h_{mean}^s$ is the mean of all the activations and $h_{mean}^{ne}$ is the mean of hidden activations $(h_i^s, ... , h_j^s)$ between the span of the named entity. This representation vector R is fed into a multi-layer perceptron, which predicts the probability of a named entity being a pivotal answer. Then we select the entity with the highest probability as the answer entity. More formally,
$$P(NE_i|S) = \mathrm{softmax}(\textbf {R}_i W + B)$$ (Eq. 6)
where $W$ is weight, $B$ is bias, and $P(NE_i|S)$ is the probability of named entity being the pivotal answer.
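A hedged PyTorch sketch of this named entity selection model (the paper's implementation is in Torch): a 2-layer LSTM encodes the sentence, the representation R concatenates the final hidden state, the mean over all states, and the mean over the entity span, and an MLP scores each candidate entity. Hidden sizes and the MLP depth are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EntitySelector(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=600):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, num_layers=2, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(3 * hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, token_ids, spans):
        # token_ids: (1, M) sentence; spans: list of (i, j) index pairs, one per candidate entity
        h, _ = self.encoder(self.emb(token_ids))   # (1, M, hidden)
        h = h.squeeze(0)
        reps = [torch.cat([h[-1], h.mean(0), h[i:j + 1].mean(0)]) for i, j in spans]
        scores = self.mlp(torch.stack(reps)).squeeze(-1)
        return torch.softmax(scores, dim=0)        # probability of each entity being pivotal
```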
Answer Selection using Pointer Networks
We propose a novel Pointer Network BIBREF14 based approach to find the span of pivotal answer given a sentence. Using the attention mechanism, a boundary Pointer Network output start and end positions from the input sequence. More formally, the problem can be formulated as follows: given a sentence S, we want to predict the start index $a_k^{start}$ and the end index $a_k^{end}$ of the pivotal answer. The main motivation in using a boundary pointer network is to predict the span from the input sequence as output. While we adapt the boundary pointer network to predict the start and end index positions of the pivotal answer in the sentence, we also present results using a sequence pointer network instead.
The answer sequence pointer network produces a sequence of pointers as output. Each pointer in the sequence is the word index of some token in the input. This only ensures that the output tokens are contained in the sentence; they do not necessarily form a contiguous substring. Let the encoder's hidden states for a sentence be $H = (h_1,h_2,\ldots ,h_n)$; the probability of generating the output sequence $O = (o_1,o_2,\ldots ,o_m)$ is defined as,
$$P(O|S) = \prod _{i=1}^{m} P(o_i|o_1,o_2,\ldots ,o_{i-1},H)$$ (Eq. 8)
We model the probability distribution as:
$$u^i = v^T \tanh (W^e\hat{H}+W^dD_i)$$ (Eq. 9)
$$P(o_i|o_1,o_2,\ldots ,o_{i-1},H) = \mathrm{softmax}(u^i)$$ (Eq. 10)
Here, $W^e\in R^{d \times 2d}$ , $W^D\in R^{d \times d}$ , $v\in R^d$ are the model parameters to be learned. $\hat{H}$ is ${<}H;0{>}$ , where a 0 vector is concatenated with LSTM encoder hidden states to produce an end pointer token. $D_i$ is produced by taking the last state of the LSTM decoder with inputs ${<}softmax(u^i)\hat{H};D_{i-1}{>}$ . $D_0$ is a zero vector denoting the start state of the decoder.
Answer boundary pointer network produces two tokens corresponding to the start and end index of the answer span. The probability distribution model remains exactly the same as answer sequence pointer network. The boundary pointer network is depicted in Figure 2 .
We take sentence S = $(w_1,w_2,\ldots ,w_M)$ and generate the hidden activations H by using embedding lookup and an LSTM encoder. As the pointers are not conditioned over a second sentence, the decoder is fed with just a start state.
Example: For the Sentence: “other past residents include composer journalist and newspaper editor william henry wills , ron goodwin , and journalist angela rippon and comedian dawn french”, the answer pointers produced are:
Pointer(s) by answer sequence: [6,11,20] $\rightarrow $ journalist henry rippon
Pointer(s) by answer boundary: [10,12] $\rightarrow $ william henry wills
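A hedged PyTorch sketch of the pointer attention step (Eqs. 9–10); the boundary variant runs it twice, once for the start and once for the end index of the pivotal answer span. Shapes follow the dimensions stated above ($W^e \in R^{d\times 2d}$, $W^d \in R^{d\times d}$, $v \in R^d$), while initialization and decoding details are simplified.

```python
import torch
import torch.nn as nn

class PointerAttention(nn.Module):
    def __init__(self, hidden):
        super().__init__()
        self.W_e = nn.Linear(2 * hidden, hidden, bias=False)  # applied to encoder states
        self.W_d = nn.Linear(hidden, hidden, bias=False)      # applied to the decoder state
        self.v = nn.Linear(hidden, 1, bias=False)

    def forward(self, H_hat, d_i):
        # H_hat: (M+1, 2*hidden) encoder states with an appended zero "end" vector
        # d_i:   (hidden,) current decoder state
        u = self.v(torch.tanh(self.W_e(H_hat) + self.W_d(d_i))).squeeze(-1)  # (M+1,)
        return torch.softmax(u, dim=0)   # pointer distribution over input positions

# A boundary pointer network applies this attention at two decoding steps to obtain
# the start and end pointers of the answer span.
```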
Question Generation
After encoding the pivotal answer (prediction of the answer selection module) in a sentence, we train a sequence to sequence model augmented with a rich set of linguistic features to generate the question. In sections below we describe our linguistic features as well as our sequence to sequence model.
Sequence to Sequence Model
Sequence to sequence models BIBREF2 learn to map input sequence (sentence) to an intermediate fixed length vector representation using an encoder RNN along with the mapping for translating this vector representation to the output sequence (question) using another decoder RNN. Encoder of the sequence to sequence model first conceptualizes the sentence as a single fixed length vector before passing this along to the decoder which uses this vector and attention weights to generate the output.
Sentence Encoder: The sentence encoder is realized using a bi-directional LSTM. In the forward pass, the given sentence along with the linguistic features is fed through a recurrent activation function recursively till the whole sentence is processed. Using one LSTM as encoder will capture only the left side sentence dependencies of the current word being fed. To alleviate this and thus to also capture the right side dependencies of the sentence for the current word while predicting in the decoder stage, another LSTM is fed with the sentence in the reverse order. The combination of both is used as the encoding of the given sentence.
$$\overrightarrow{\hat{h}_t}=f(\overrightarrow{W}w_t + \overrightarrow{V}\overrightarrow{\hat{h}_{t-1}} +\overrightarrow{b})$$ (Eq. 13)
$$\overleftarrow{\hat{h}_t}=f(\overleftarrow{W}w_t + \overleftarrow{V}\overleftarrow{\hat{h}_{t+1}} +\overleftarrow{b})$$ (Eq. 14)
The hidden state $\hat{h_t}$ of the sentence encoder is used as the intermediate representation of the source sentence at time step $t$, whereas $W, V, U \in R^{n\times m}$ are weights, where $m$ is the word embedding dimensionality, $n$ is the number of hidden units, and $w_t \in R^{p\times q \times r}$ is the feature-encoded input vector at time step $t$.
Attention Mechanism: In the commonly used sequence to sequence model ( BIBREF2 ), the decoder is directly initialized with intermediate source representation ( $\hat{h_t}$ ). Whereas the attention mechanism proposed in BIBREF11 suggests using a subset of source hidden states, giving more emphasis to a, possibly, more relevant part of the context in the source sentence while predicting a new word in the target sequence. In our method we specifically use the global attention mechanism. In this mechanism a context vector $c_t$ is generated by capturing relevant source side information for predicting the current target word $y_t$ in the decoding phase at time $t$ . Relevance between the current decoder hidden state $h_t$ and each of the source hidden states ( $\hat{h_1},\hat{h_2}...\hat{h_{N}}$ ) is realized through a dot similarity metric: $score(h_t,\hat{h_i}) = h_t^{T}\cdot \hat{h_i}$ .
A softmax layer is applied over these scores to get the variable-length alignment vector $\alpha _t$ (Eq. 16), which in turn is used to compute the weighted sum over all the source hidden states ($\hat{h_1},\hat{h_2}, \ldots , \hat{h_N}$) to generate the context vector $c_t$ at time $t$.
$$\alpha _t(i) = align(h_t,\hat{h_i}) = \frac{\exp (score(h_t,\hat{h_i}))}{\sum \limits _{i^{\prime }} \exp (score(h_t,\hat{h_{i^{\prime }}}))}, \qquad c_t = \sum \limits _{i} \alpha _t(i)\, \hat{h_i}$$ (Eq. 16)
The question decoder is a two-layer LSTM network. It takes the output of the sentence encoder and decodes it to generate the question. The question decoder is designed to maximize our objective in Equation (3). More formally, the decoder computes the probability $P(Q|S;\theta )$ as:
$$P(Q|S;\theta )=softmax(W_s(tanh(W_r[h_t,c_t]+b)))$$ (Eq. 18)
where $W_s$ and $W_r$ are weight matrices and tanh is the activation function. The hidden state of the decoder along with the context vector $c_t$ is used to predict the target word $y_t$. Since the decoder learns a probability distribution over the words in the vocabulary, it may output words that are not even present in the source sentence. To generate questions relevant to the text, we suitably modified the decoder and integrated an attention mechanism (described in Section "Sequence to Sequence Model") with it to attend to words in the source sentence while generating questions. This modification to the decoder increases the relevance of the question generated for a particular sentence.
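A hedged PyTorch sketch of a single decoding step with global dot-product attention (Eqs. 16–18); names and shapes are illustrative, and the paper's actual implementation is in Torch.

```python
import torch
import torch.nn as nn

class GlobalAttentionDecoderStep(nn.Module):
    def __init__(self, hidden, vocab_size):
        super().__init__()
        self.W_r = nn.Linear(2 * hidden, hidden)   # applied to [h_t; c_t], includes bias b
        self.W_s = nn.Linear(hidden, vocab_size)

    def forward(self, h_t, encoder_states):
        # h_t: (hidden,) decoder state; encoder_states: (N, hidden)
        scores = encoder_states @ h_t              # dot-product score(h_t, h_i)
        alpha = torch.softmax(scores, dim=0)       # alignment weights (Eq. 16)
        c_t = alpha @ encoder_states               # context vector
        logits = self.W_s(torch.tanh(self.W_r(torch.cat([h_t, c_t]))))
        return torch.softmax(logits, dim=-1)       # P(y_t | ...) as in Eq. 18
```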
Linguistic Features
We propose using a set of linguistic features so that the model can learn better generalized transformation rules, rather than learning a transformation rule per sentence. We describe our features below:
POS Tag: Part-of-speech tag of the word. Words having the same POS tag have similar grammatical properties and demonstrate similar syntactic behavior. We use the Stanford CoreNLP -pos annotator to get the POS tag of each word in the sentence.
Named Entity Tag: Name entity tag represent coarse grained category of a word for example PERSON, PLACE, ORGANIZATION, DATE, etc. In order to help the model identify named entities present in the sentence, named entity tag of each word is provided as a feature. This ensures that the model learns to pose a question about the entities present in the sentence. We use the Stanford CoreNLP -ner annotator to assign named entity tag to each word.
Dependency Label: Dependency label of a word is the edge label connecting each word with the parent in the dependency parse tree. Root node of the tree is assigned label `ROOT'. Dependency label help models to learn inter-word relations. It helps in understanding the semantic structure of the sentence while generating question. Dependency structure also helps in learning syntactic transformations between sentence and question pair. Verbs and adverbs present in the sentence signify the type of the question (which, who .. etc.) that would be posed for the subject it refers to. We use dependency parse trees generated using the Stanford CoreNLP parser to obtain the dependency labels.
Linguistic features are added by the conventional feature concatenation of tokens using the delimiter ` $|$ '. We create separate vocabularies for words (encoded using glove's pre-trained word embedding) and features (using one-hot encoding) respectively.
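A small sketch of this feature-tagging step: each token is annotated with its POS tag, named-entity tag, and dependency label and the annotations are concatenated with the `|` delimiter. The sketch uses spaCy for brevity, whereas the paper uses Stanford CoreNLP, so the exact tag inventories differ.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def feature_tag(sentence):
    doc = nlp(sentence)
    return " ".join(
        f"{tok.text}|{tok.pos_}|{tok.ent_type_ or 'O'}|{tok.dep_}" for tok in doc)

# e.g. feature_tag("ron goodwin was a composer") produces tokens like
# "ron|PROPN|PERSON|compound ..." (exact tags depend on the model used)
```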
Implementation Details
We implement our answer selection and question generation models in Torch. The sentence encoder of QG is a 3-layer bi-directional LSTM stack and the question decoder is a 3-layer LSTM stack. Each LSTM has a hidden state size of 600 units. We use pre-trained GloVe embeddings BIBREF15 of 300 dimensions for both the encoder and the decoder. All model parameters are optimized using the Adam optimizer with a learning rate of 1.0, and we decay the learning rate by 0.5 after the 10th epoch of training. The dropout probability is set to 0.3. We train our model in each experiment for 30 epochs and select the model with the lowest perplexity on the validation set.
The linguistic features for each word such as POS, named entity tag etc., are incorporated along with word embeddings through concatenation.
Experiments and Results
We evaluate performance of our models on the SQUAD BIBREF16 dataset (denoted $\mathcal {S}$ ). We use the same split as that of BIBREF4 , where a random subset of 70,484 instances from $\mathcal {S}\ $ are used for training ( ${\mathcal {S}}^{tr}$ ), 10,570 instances for validation ( ${\mathcal {S}}^{val}$ ), and 11,877 instances for testing ( ${\mathcal {S}}^{te}$ ).
We performed both human-based evaluation as well as automatic evaluation to assess the quality of the questions generated. For automatic evaluation, we report results using a metric widely used to evaluate machine translation systems, called BLEU BIBREF17 .
We first list the different systems (models) that we evaluate and compare in our experiments. A note about abbreviations: Whereas components in blue are different alternatives for encoding the pivotal answer, the brown color coded component represents the set of linguistic features that can be optionally added to any model.
Baseline System (QG): Our baseline system is a sequence-to-sequence LSTM model (see Section "Question Generation" ) trained only on raw sentence-question pairs without using features or answer encoding. This model is the same as BIBREF4 .
System with feature tagged input (QG+F): We encoded linguistic features (see Section "Linguistic Features " ) for each sentence-question pair to augment the basic QG model. This was achieved by appending features to each word using the “ $|$ ” delimiter. This model helps us analyze the isolated effect of incorporating syntactic and semantic properties of the sentence (and words in the sentence) on the outcome of question generation.
Features + NE encoding (QG+F+NE): We also augmented the feature-enriched sequence-to-sequence QG+F model by encoding each named entity predicted by the named entity selection module (see section "Named Entity Selection" ) as a pivotal answer. This model helps us analyze the effect of (indiscriminate) use of named entity as potential (pivotal) answer, when used in conjunction with features.
Ground truth answer encoding (QG+GAE): In this setting we use the encoding of ground truth answers from sentences to augment the training of the basic QG model (see Section "Named Entity Selection" ). For encoding answers into the sentence we employ the BIO notation. We append “B” as a feature using the delimiter “ $|$ ” to the first word of the answer and “I” as a feature for the rest of the answer words. We used this model to analyze the effect of answer encoding on question generation, independent of features and named entity alignment.
We would like to point out that any direct comparison of a generated question with the question in the ground truth using any machine translation-like metric (such as the BLEU metric discussed in Section "Results and Analysis " ) makes sense only when both the questions are associated with the same pivotal answer. This specific experimental setup and the ones that follow are therefore more amenable for evaluation using standard metrics used in machine translation.
Features + sequence pointer network predicted answer encoding (QG+F+AES): In this setting, we encoded the pivotal answer in the sentence as predicted by the sequence pointer network (see Section "Implementation Details" ) to augment the linguistic feature based QG+F model. In this and in the following setting, we expect the prediction of the pivotal answer in the sentence to closely approximate the ground truth answer.
Features + boundary pointer network predicted answer encoding (QG+F+AEB): In this setting, we encoded the pivotal answer in the sentence as predicted by the boundary pointer network (see Section "Implementation Details" ) to augment the linguistic feature based QG+F model.
Features + ground truth answer encoding (QG+F+GAE): In this experimental setup, building upon the previous model (QG+F), we encoded ground truth answers to augment the QG model.
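To make the answer encoding used by the answer-encoded systems above concrete, here is a minimal sketch of the BIO scheme: "B" is appended (with the "|" delimiter) to the first answer word and "I" to the remaining answer words; tagging all other words with "O" is an assumption of this sketch rather than something the paper states.

```python
def encode_answer(tokens, start, end):
    """tokens: sentence tokens; [start, end]: inclusive indices of the pivotal answer span."""
    tagged = []
    for i, tok in enumerate(tokens):
        if i == start:
            tagged.append(f"{tok}|B")
        elif start < i <= end:
            tagged.append(f"{tok}|I")
        else:
            tagged.append(f"{tok}|O")
    return " ".join(tagged)

# encode_answer("it was founded in 1986".split(), 4, 4)
# -> "it|O was|O founded|O in|O 1986|B"
```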
Results and Analysis
We compare the performance of the 7 systems QG, QG+F, QG+F+NE, QG+GAE, QG+F+AES, QG+F+AEB and QG+F+GAE described in the previous sections on (the train-val-test splits of) ${\mathcal {S}}$ and report results using both human and automated evaluation metrics. We first describe experimental results using human evaluation followed by evaluation on other metrics.
Human Evaluation: We randomly selected 100 sentences from the test set ( ${\mathcal {S}}^{te}$ ) and generated one question using each of the 7 systems for each of these 100 sentences and asked three human experts for feedback on the quality of questions generated. Our human evaluators are professional English language experts. They were asked to provide feedback about a randomly sampled sentence along with the corresponding questions from each competing system, presented in an anonymised random order. This was to avoid creating any bias in the evaluator towards any particular system. They were not at all primed about the different models and the hypothesis.
We asked the following binary (yes/no) questions to each of the experts: a) is this question syntactically correct?, b) is this question semantically correct?, and c) is this question relevant to this sentence?. Responses from all three experts were collected and averaged. For example, suppose the cumulative scores of the 100 binary judgements for syntactic correctness by the 3 evaluators were $(80, 79, 73)$ . Then the average response would be 77.33. In Table 1 we present these results on the test set ${\mathcal {S}}^{te}$ .
Evaluation on other metrics: We also evaluated our system on other standard metrics to enable comparison with other systems. However, as explained earlier, the standard metrics used in machine translation such as BLEU BIBREF17 , METEOR BIBREF18 , and ROUGE-L BIBREF19 , might not be appropriate measures to evaluate the task of question generation. To appreciate this, consider the candidate question “who was the widow of mcdonald 's owner ?” against the ground truth “to whom was john b. kroc married ?” for the sentence “it was founded in 1986 through the donations of joan b. kroc , the widow of mcdonald 's owner ray kroc.”. It is easy to see that the candidate is a valid question and makes perfect sense. However, its BLEU-4 score is almost zero. Thus, it may be the case that the human generated question against which we evaluate the system generated questions may be completely different in structure and semantics, but still be perfectly valid, as seen previously. While we find human evaluation to be more appropriate, for the sake of completeness, we also report the BLEU, METEOR and ROUGE-L scores in each setting. In Table 2, we observe that our models, QG+F+AEB, QG+F+AES and QG+F+GAE, outperform the state-of-the-art question generation system QG BIBREF4 significantly on all standard metrics.
Our model QG+F+GAE, which encodes ground truth answers and uses a rich set of linguistic features, performs the best as per every metric. And in Table 1 , we observe that adding the rich set of linguistic features to the baseline model (QG) further improves performance. Specifically, addition of features increases syntactic correctness of questions by 2%, semantic correctness by 9% and relevance of questions with respect to sentence by 12.3% in comparison with the baseline model QG BIBREF4 .
In Figure 3 we present some sample answers predicted and corresponding questions generated by our model QG+F+AEB. Though not better, the performance of models QG+F+AES and QG+F+AEB is comparable to the best model (that is QG+F+GAE, which additionally uses ground truth answers). This is because the ground truth answer might not be the best and most relevant pivotal answer for question generation, particularly since each question in the SQUAD dataset was generated by looking at an entire paragraph and not any single sentence. Consider the sentence “manhattan was on track to have an estimated 90,000 hotel rooms at the end of 2014 , a 10 % increase from 2013 .”. On encoding the ground truth answer, “90,000”, the question generated using model QG+GAE is “what was manhattan estimated hotel rooms in 2014 ?” and, additionally, with linguistic features (QG+F+GAE), we get “how many hotel rooms did manhattan have at the end of 2014 ?”. This is indicative of how a rich set of linguistic features helps in shaping the correct question type as well as generating syntactically and semantically correct questions. Further, when we do not encode any answer (either the pivotal answer predicted by the sequence/boundary pointer network or the ground truth answer) and just augment the linguistic features (QG+F), the question generated is “what was manhattan 's hotel increase in 2013 ?”, which is clearly a poor quality question. Thus, both answer encoding and augmenting a rich set of linguistic features are important for generating high quality (syntactically correct, semantically correct and relevant) questions. When we select the pivotal answer from amongst the set of named entities present in the sentence (i.e., model QG+F+NE), the question generated on encoding the named entity “manhattan” is “what was the 10 of hotel 's city rooms ?”, which is clearly a poor quality question. The poor performance of QG+F+NE can be attributed to the fact that only 50% of the answers in the SQUAD dataset are named entities.
Conclusion
We introduce a novel two-stage process to generate question-answer pairs from text. We combine and enhance a number of techniques including sequence to sequence models, Pointer Networks, named entity alignment, as well as rich linguistic features to identify potential answers from text, handle rare words, and generate questions most relevant to the answer. To the best of our knowledge this is the first attempt in generating question-answer pairs. Our comprehensive evaluation shows that our approach significantly outperforms current state-of-the-art question generation techniques on both human evaluation and evaluation on common metrics such as BLEU, METEOR, and ROUGE-L. | SQUAD |
11d2f0d913d6e5f5695f8febe2b03c6c125b667c | 11d2f0d913d6e5f5695f8febe2b03c6c125b667c_0 | Q: How is performance of this system measured?
Text: Introduction
Increases in life expectancy in the last century have resulted in a large number of people living to old ages, and will result in a doubling of the number of dementia cases by the middle of the century BIBREF0BIBREF1. The most common form of dementia is Alzheimer's disease, which contributes to 60–70% of cases BIBREF2. Research focused on identifying treatments to slow down the evolution of Alzheimer's disease is a very active pursuit, but it has only been successful in terms of developing therapies that ease the symptoms without addressing the cause BIBREF3BIBREF4. Besides, people with dementia might face barriers to accessing therapies, such as cost, availability and displacement to the care home or hospital where the therapy takes place. We believe that Artificial Intelligence (AI) can contribute innovative systems to improve accessibility and offer new solutions to patients' needs, as well as help relatives and caregivers to understand the illness of their family member or patient and monitor the progress of the dementia.
Therapies such as reminiscence, that stimulate memories of the patient's past, have well documented benefits on social, mental and emotional well-being BIBREF5BIBREF6, making them a very desirable practice, especially for older adults. Reminiscence therapy in particular involves the discussion of events and past experiences using tangible prompts such as pictures or music to evoke memories and stimulate conversation BIBREF7. With this aim, we explore multi-modal deep learning architectures to be used to develop an intuitive, easy to use, and robust dialogue system to automatize the reminiscence therapy for people affected by mild cognitive impairment or at early stages of Alzheimer's disease.
We propose a conversational agent that simulates a reminiscence therapist by asking questions about the patient's experiences. Questions are generated from pictures provided by the patient, which contain significant moments or important people in the user's life. Moreover, to engage the user in the conversation we propose a second model which generates comments on the user's answers: a chatbot model trained with a dataset containing simple conversations between different people. The activity is intended to be challenging for the patient, as the questions may require the user to exercise their memory. Our contributions include:
Automation of the Reminiscence therapy by using a multi-modal approach that generates questions from pictures, without using a reminiscence therapy dataset.
An end-to-end deep learning approach which does not require hand-crafted rules and is ready to be used by mild cognitive impairment patients. The system is designed to be intuitive and easy to use, and can be reached from any smartphone with an internet connection.
Related Work
The origin of chatbots goes back to 1966 with the creation of ELIZA BIBREF8 by Joseph Weizenbaum at MIT. Its implementation consisted of a pattern matching and substitution methodology. Recently, data-driven approaches have drawn significant attention. Existing work along this line includes retrieval-based methods BIBREF9BIBREF10 and generation-based methods BIBREF11BIBREF12. In this work we focus on generative models, where the sequence-to-sequence algorithm, which uses RNNs to encode and decode inputs into responses, is the current best practice.
Our conversational agent uses two architectures to simulate a specialized reminiscence therapist. The block in charge of generating questions is based on the work Show, Attend and Tell BIBREF13. This work generates descriptions from pictures, also known as image captioning. In our case, we focus on generating questions from pictures. Our second architecture is inspired by Neural Conversational Model from BIBREF14 where the author presents an end-to-end approach to generate simple conversations. Building an open-domain conversational agent is a challenging problem. As addressed in BIBREF15 and BIBREF16, the lack of a consistent personality and lack of long-term memory which produces some meaningless responses in these models are still unresolved problems.
Some works have proposed conversational agents for older adults with a variety of uses, such as stimulate conversation BIBREF17 , palliative care BIBREF18 or daily assistance. An example of them is ‘Billie’ reported in BIBREF19 which is a virtual agent that uses facial expression for a more natural behavior and is focused on managing user’s calendar, or ‘Mary’ BIBREF20 that assists the users by organizing their tasks offering reminders and guidance with household activities. Both of the works perform well on its specific tasks, but report difficulties to maintain a casual conversation. Other works focus on the content used in Reminiscence therapy. Like BIBREF21 where the authors propose a system that recommends multimedia content to be used in therapy, or Visual Dialog BIBREF22 where the conversational agent is the one that has to answer the questions about the image.
Methodology
In this section we explain the two main components of our model, as well as how the interaction with the model works. We named it Elisabot, and its goal is to maintain a dialogue with the patient about the user's life experiences.
Before starting the conversation, the user must introduce photos that should contain significant moments for him/her. The system randomly chooses one of these pictures and analyses the content. Then, Elisabot shows the selected picture and starts the conversation by asking a question about the picture. The user should give an answer, even though he does not know it, and Elisabot makes a relevant comment on it. The cycle starts again by asking another relevant question about the image and the flow is repeated for 4 to 6 times until the picture is changed. The Figure FIGREF3 summarizes the workflow of our system.
Elisabot is composed of two models: the model in charge of asking questions about the image, which we will refer to as the VQG model, and the Chatbot model, which tries to make the dialogue more engaging by giving feedback on the user's answers.
Methodology ::: VQG model
The algorithm behind VQG consists of an Encoder-Decoder architecture with attention. The Encoder takes as input one of the given photos $I$ from the user and learns its information using a CNN. CNNs have been widely studied for computer vision tasks. The CNN provides the image's learned features to the Decoder, which generates the question $y$ word by word using an attention mechanism with a Long Short-Term Memory (LSTM). The model is trained to maximize the likelihood $p(y|I)$ of producing a target sequence of words:
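The displayed equation introduced by the sentence above appears to be missing from the extracted text; a formulation consistent with the following sentence (and with the Show, Attend and Tell objective the model builds on) would be the factorized log-likelihood

$\log p(y|I) = \sum _{t=1}^{C} \log p(y_t \mid y_1, \ldots , y_{t-1}, I), \qquad y = \lbrace y_1, \ldots , y_C \rbrace , \; y_t \in \mathbb {R}^K,$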
where $K$ is the size of the vocabulary and $C$ is the length of the caption.
Since there are already Convolutional Neural Networks (CNNs) trained on large datasets to represent images with an outstanding performance, we make use of transfer learning to integrate a pre-trained model into our algorithm. In particular, we use a ResNet-101 BIBREF23 model trained on ImageNet. We discard the last 2 layers, since these layers classify the image into categories and we only need to extract its features.
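As an illustration of this step (a sketch under our own assumptions, not the authors' code; whether the backbone is fine-tuned is not stated in the text, so it is frozen here), the feature extractor can be built in PyTorch as follows:

    import torch.nn as nn
    import torchvision.models as models

    class EncoderCNN(nn.Module):
        """ResNet-101 pretrained on ImageNet, with the last two layers removed (sketch)."""
        def __init__(self):
            super().__init__()
            resnet = models.resnet101(pretrained=True)
            # Drop the final average-pooling and classification layers; what remains
            # outputs a 2048-channel feature map instead of ImageNet class scores.
            self.backbone = nn.Sequential(*list(resnet.children())[:-2])
            for param in self.backbone.parameters():
                param.requires_grad = False  # assumed frozen for transfer learning

        def forward(self, images):               # images: (batch, 3, 224, 224)
            features = self.backbone(images)     # (batch, 2048, 7, 7)
            return features.permute(0, 2, 3, 1)  # spatial features for the attention decoder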
Methodology ::: Chatbot network
The core of our chatbot model is a sequence-to-sequence BIBREF24. This architecture uses a Recurrent Neural Network (RNN) to encode a variable-length sequence to obtain a large fixed dimensional vector representation and another RNN to decode the vector into a variable-length sequence.
The encoder iterates through the input sentence one word at each time step producing an output vector and a hidden state vector. The hidden state vector is passed to the next time step, while the output vector is stored. We use a bidirectional Gated Recurrent Unit (GRU), meaning we use two GRUs one fed in sequential order and another one fed in reverse order. The outputs of both networks are summed at each time step, so we encode past and future context.
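A minimal PyTorch sketch of this encoder (illustrative only; the 500-unit hidden size and 25% dropout reported later in the paper are reused here, everything else is an assumption):

    import torch.nn as nn

    class EncoderRNN(nn.Module):
        """Bidirectional GRU encoder whose two directions are summed at each step (sketch)."""
        def __init__(self, vocab_size, hidden_size=500, num_layers=1, dropout=0.25):
            super().__init__()
            self.hidden_size = hidden_size
            self.embedding = nn.Embedding(vocab_size, hidden_size)
            self.gru = nn.GRU(hidden_size, hidden_size, num_layers,
                              dropout=(0 if num_layers == 1 else dropout),
                              bidirectional=True)

        def forward(self, input_seq, hidden=None):
            # input_seq: (max_len, batch) tensor of word indices
            embedded = self.embedding(input_seq)
            outputs, hidden = self.gru(embedded, hidden)
            # Sum forward and backward outputs so each position carries past and future context.
            outputs = outputs[:, :, :self.hidden_size] + outputs[:, :, self.hidden_size:]
            return outputs, hidden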
The final hidden state $h_t^{enc}$ is fed into the decoder as the initial state $h_0^{dec}$. By using an attention mechanism, the decoder uses the encoder’s context vectors and internal hidden states to generate the next word in the sequence. It continues generating words until it outputs an $<$end$>$ token, representing the end of the sentence. We use an attention layer that multiplies attention weights with the encoder's outputs to focus on the relevant information when decoding the sequence. This approach has shown better performance on sequence-to-sequence models BIBREF25.
Datasets
One of the first requirements to develop an architecture using a machine learning approach is a training dataset. The lack of open-source datasets containing dialogues from reminiscence therapy led us to use a dataset with content similar to the one used in the therapy. In particular, we use two types of datasets to train our models: a dataset that maps pictures to questions, and an open-domain conversation dataset. The details of the two datasets are as follows.
Datasets ::: MS-COCO, Bing and Flickr datasets
We use the MS COCO, Bing and Flickr datasets from BIBREF26 to train the model that generates questions. These datasets contain natural questions about images with the purpose of knowing more about the picture. As can be seen in Figure FIGREF8, the questions cannot be answered by only looking at the image. Each source contains 5,000 images with 5 questions per image, adding up to a total of 15,000 images with 75,000 questions. The COCO dataset includes images of complex everyday scenes containing common objects in their natural context, but it is limited in terms of the concepts it covers. The Bing dataset contains more event-related questions and has a wider range of question lengths (between 3 and 20 words), while Flickr questions are shorter (less than 6 words) and the images appear to be more casual.
Datasets ::: Persona-chat and Cornell-movie corpus
We use two datasets to train our chatbot model. The first one is the Persona-chat BIBREF15 which contains dialogues between two people with different profiles that are trying to know each other. It is complemented by the Cornell-movie dialogues dataset BIBREF27, which contains a collection of fictional conversations extracted from raw movie scripts. Persona-chat's sentences have a maximum of 15 words, making it easier to learn for machines and a total of 162,064 utterances over 10,907 dialogues. While Cornell-movie dataset contains 304,713 utterances over 220,579 conversational exchanges between 10,292 pairs of movie characters.
Validation
An important aspect of dialogue response generation systems is how to evaluate the quality of the generated response. This section presents the training procedure and the quantitative evaluation of the model, together with some qualitative results.
Validation ::: Implementation
Both models are trained using Stochastic Gradient Descent with ADAM optimization BIBREF28 and a learning rate of 1e-4. In addition, we use dropout regularization BIBREF29, which prevents over-fitting by dropping some units of the network.
The VQG encoder is composed of 2048 neuron cells, while the VQG decoder has an attention layer of size 512 followed by an embedding layer of size 512 and an LSTM of the same size. We use a dropout of 50% and a beam search of width 7 for decoding, which lets us obtain up to 5 output questions. The vocabulary we use consists of all words seen 3 or more times in the training set, which amounts to 11,214 unique tokens. Unknown words are mapped to an $<$unk$>$ token during training, but we do not allow the decoder to produce this token at test time. We also set a maximum sequence length of 6 words, as we want simple questions that are easy to understand and easy for the model to learn.
In the Chatbot model we use a hidden size of 500 and Dropout regularization of 25%. For decoding we use greedy search, which consists in making the optimal token choice at each step. We first train it with Persona-chat and then fine-tune it with Cornell dataset. The vocabulary we use consists of all words seen 3 or more times in Persona-chat dataset and we set a maximum sequence length of 12 words. For the hyperparameter setting, we use a batch size of 64.
Validation ::: Quantitative evaluation
We use the BLEU BIBREF30 metric on the validation set for the VQG model training. BLEU is a measure of similitude between generated and target sequences of words, widely used in natural language processing. It assumes that valid generated responses have significant word overlap with the ground truth responses. We use it because in this case we have five different references for each of the generated questions. We obtain a BLEU score of 2.07.
Our chatbot model, instead, only has one reference ground truth during training when generating a sequence of words. We considered that BLEU was not a good metric to apply here, as on some occasions responses have the same meaning but do not share any words in common. Thus, we saved several models with different hyperparameters and at different numbers of training iterations and compared them using human evaluation, to choose the model that performs best in a conversation.
Validation ::: Qualitative results
Our first goal was to generate meaningful questions from the provided pictures. Table TABREF16 contains some questions generated by our VQG model for each of the proposed images. These images were not in the training set, as they were taken from personal photo albums, so it was the first time the model had them as input. By looking at the questions, we can easily see that the model is not only able to generate grammatically correct questions, but also to understand the content of the image and ask about it. Furthermore, we did not want to generate only one question per image, but to bring out up to five questions for a deeper conversation.
We used different kinds of pictures, as we wanted a robust model able to work with all kinds of images. In Table TABREF16 we can see both old grayscale and colorful photos, with both vertical and horizontal orientations. Most of the pictures we selected for testing showed people, because these are the type of pictures used in reminiscence therapy, as they evoke memories of the relationships with those persons. We also tested pictures showing pets, cars and motorbikes, as some people are very attached to them, triggering strong emotions.
Most of the generated questions could be asked by a person. Nonetheless, we found some of the questions such as How did the car get damaged? that do not correspond with the image's content, or are not well formulated like What body of dog is that?, but can be easily understood.
About the chatbot module, we test it manually by having a conversation with it. In the following lines, we can see two samples of a conversation, where each sentence starting with "$>$" is an input to our seq2seq model. In this first sample, the chatbot model is clearly overfitted as it repeats the same sentence twice. While in the second sample, we can see that the model is able to follow the conversation.
Overfitted chatbot model:
$>$ hi
Elisabot: hello ! how are you ?
$>$ fine and you?
Elisabot : i am good do you have any hobbies ?
$>$ yes i like to ski
Elisabot: i love to play sports . it is my favorite
$>$ I have a sister
Elisabot : i have a dog . i love them .
$>$ I have a dog too
Elisabot: i have a dog . i love them
$>$ ok
Elisabot : i am not really a good person
$>$ why?
Elisabot : i love to play sports . it is my favorite
Final chatbot model:
$>$ hello
Elisabot: hello how are you ?
$>$ fine and you?
Elisabot : i m great !
$>$ I have a sister, do you have any brother or sister?
Elisabot : i have a iphone but i have no idea .
$>$ ok
Elisabot : i am a pro fan of thrones
$>$ me too
Elisabot : i like to eat healthy
$>$ have you seen the last chapter?
Elisabot : no i have not .
$>$ where do you live?
Elisabot : i live in the mountains
Usability study
As most of the metrics correlate very weakly with human judgements in the non-technical domain BIBREF31 we decide to evaluate our system with a simple user study with two patients. We present the user interface built and the feedback obtained from the patients.
Usability study ::: User interface
We developed a user interface for Elisabot with Telegram, an instant messaging application available for smartphones or computers. We select it because it is easy to use and it offers an API for developers to connect bots to the Telegram system. It enables to create special accounts for bots which do not require a phone number to set up.
Telegram is only the interface for the code running in the server. The bot is executed via an HTTP-request to the API. Users can start a conversation with Elisabot by typing @TherapistElisabot in the searcher and executing the command /start, as can be seen in the Figure FIGREF31. Messages, commands and requests sent by users are passed to the software running on the server. We add /change, /yes and /exit commands to enable more functionalities. /Change gives the opportunity to the user to change the image in case the user does not want to talk about it, /yes accepts the image which is going to talk about and /exit finishes the dialogue with Elisabot. The commands can be executed either by tapping on the linked text or typing them.
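For illustration, the command wiring described above could look roughly like the sketch below. It assumes the python-telegram-bot library with its v13-style API; the handler bodies, reply texts and the token placeholder are our own illustrative assumptions, not the authors' code:

    from telegram.ext import Updater, CommandHandler, MessageHandler, Filters

    def start(update, context):
        update.message.reply_text("Hi, I am Elisabot! Let's talk about one of your pictures.")

    def change(update, context):
        update.message.reply_text("Okay, let's look at a different picture.")
        # here the server-side logic would pick another image and send a new question

    def handle_answer(update, context):
        # here the VQG and chatbot models would produce the next question and feedback
        update.message.reply_text("That sounds lovely! Tell me more about it.")

    updater = Updater("TELEGRAM_BOT_TOKEN", use_context=True)  # token is a placeholder
    dispatcher = updater.dispatcher
    dispatcher.add_handler(CommandHandler("start", start))
    dispatcher.add_handler(CommandHandler("change", change))
    # /yes and /exit would be registered the same way
    dispatcher.add_handler(MessageHandler(Filters.text & ~Filters.command, handle_answer))
    updater.start_polling()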
Feedback from patients
We designed a usability study where users with and without mild cognitive impairment interacted with the system with the help of a doctor and one of the authors. The purpose was to study the acceptability and feasibility of the system with patients of mild cognitive impairment. The users were all older than 60 years old. The sessions lasted 30 minutes and were carried out by using a laptop computer connected to Telegram. As Elisabot's language is English we translated the questions to the users and the answers to Elisabot.
Figure FIGREF38 is a sample of the session we did with mild cognitive impairment patients from an anonymized institution and location. The picture provided by the patient (Figure FIGREF37) is blurred for the user's privacy rights. In this experiment all the generated questions were appropriate to the image content, but the feedback was wrong for some of the answers. We can see that it was the last picture of the session: when Elisabot asks if the user wants to continue or leave and he decides to continue, Elisabot finishes the session anyway, as there are no more pictures remaining to talk about.
At the end of the session, we administered a survey to ask participants the following questions about their assessment of Elisabot:
Did you like it?
Did you find it engaging?
How difficult have you found it?
Responses were given on a five-point scale ranging from strongly disagree (1) to strongly agree (5) and very easy (1) to very difficult (5). The results were 4.6 for amusing and engaging and 2.6 for difficulty. Healthy users found it very easy to use (1/5) and even a bit silly, because of some of the generated questions and comments. Nevertheless, users with mild cognitive impairment found it engaging (5/5) and challenging (4/5), because of the effort they had to make to remember the answers for some of the generated questions. All the users had in common that they enjoyed doing the therapy with Elisabot.
Conclusions
We presented a dialogue system for handling 30-minute sessions of reminiscence therapy. Elisabot, our conversational agent, leads the therapy by showing a picture and generating some questions. The goal of the system is to improve users' mood and stimulate their memory and communication skills. Two models were proposed to build the dialogue system for the reminiscence therapy: a visual question generator composed of a CNN and an LSTM with attention, and a sequence-to-sequence model to generate feedback on the user's answers. We observed that fine-tuning our chatbot model with another dataset improved the generated dialogue.
The manual evaluation shows that our model can generate questions and feedback that are well formulated grammatically, but on some occasions not appropriate in content. As expected, it has a tendency to produce non-specific answers and to lose its consistency in the comments with respect to what it has said before. However, the overall usability evaluation of the system by users with mild cognitive impairment shows that they found the session very entertaining and challenging. They had to make an effort to remember the answers to some of the questions, but they were very satisfied when they achieved it. Nevertheless, we see that for the therapy to work properly, it is essential to have a person supporting the user and helping them remember the experiences being asked about.
This project has many possible future lines. In our future work, we suggest training the model on the Reddit dataset as well, which could improve the chatbot model, as it contains many open-domain conversations. Moreover, we would like to include speech recognition and generation, as well as real-time text translation, to make Elisabot more autonomous and open to older adults with reading and writing difficulties. Furthermore, the lack of consistency in the dialogue might be avoided by improving the architecture to include information about the past conversation in the model. We also think it would be a good idea to recognize feelings from the user's answers and give feedback according to them.
Acknowledgements
Marioan Caros was funded with a scholarship from the Fundacion Vodafona Spain. Petia Radeva was partially funded by TIN2018-095232-B-C21, 2017 SGR 1742, Nestore, Validithi, and CERCA Programme/Generalitat de Catalunya. We acknowledge the support of NVIDIA Corporation with the donation of Titan Xp GPUs. | using the BLEU score as a quantitative metric and human evaluation for quality |
1c85a25ec9d0c4f6622539f48346e23ff666cd5f | 1c85a25ec9d0c4f6622539f48346e23ff666cd5f_0 | Q: How many questions per image on average are available in dataset?
Text: Introduction
Increases in life expectancy in the last century have resulted in a large number of people living to old ages, and will result in a doubling of the number of dementia cases by the middle of the century BIBREF0BIBREF1. The most common form of dementia is Alzheimer's disease, which contributes to 60–70% of cases BIBREF2. Research focused on identifying treatments to slow down the evolution of Alzheimer's disease is a very active pursuit, but it has only been successful in terms of developing therapies that ease the symptoms without addressing the cause BIBREF3BIBREF4. Besides, people with dementia might face barriers to accessing therapies, such as cost, availability and displacement to the care home or hospital where the therapy takes place. We believe that Artificial Intelligence (AI) can contribute innovative systems to improve accessibility and offer new solutions to patients' needs, as well as help relatives and caregivers to understand the illness of their family member or patient and monitor the progress of the dementia.
Therapies such as reminiscence, that stimulate memories of the patient's past, have well documented benefits on social, mental and emotional well-being BIBREF5BIBREF6, making them a very desirable practice, especially for older adults. Reminiscence therapy in particular involves the discussion of events and past experiences using tangible prompts such as pictures or music to evoke memories and stimulate conversation BIBREF7. With this aim, we explore multi-modal deep learning architectures to be used to develop an intuitive, easy to use, and robust dialogue system to automatize the reminiscence therapy for people affected by mild cognitive impairment or at early stages of Alzheimer's disease.
We propose a conversational agent that simulates a reminiscence therapist by asking questions about the patient's experiences. Questions are generated from pictures provided by the patient, which contain significant moments or important people in the user's life. Moreover, to engage the user in the conversation we propose a second model which generates comments on the user's answers: a chatbot model trained with a dataset containing simple conversations between different people. The activity is intended to be challenging for the patient, as the questions may require the user to exercise their memory. Our contributions include:
Automation of the Reminiscence therapy by using a multi-modal approach that generates questions from pictures, without using a reminiscence therapy dataset.
An end-to-end deep learning approach which does not require hand-crafted rules and is ready to be used by mild cognitive impairment patients. The system is designed to be intuitive and easy to use, and can be reached from any smartphone with an internet connection.
Related Work
The origin of chatbots goes back to 1966 with the creation of ELIZA BIBREF8 by Joseph Weizenbaum at MIT. Its implementation consisted of a pattern matching and substitution methodology. Recently, data-driven approaches have drawn significant attention. Existing work along this line includes retrieval-based methods BIBREF9BIBREF10 and generation-based methods BIBREF11BIBREF12. In this work we focus on generative models, where the sequence-to-sequence algorithm, which uses RNNs to encode and decode inputs into responses, is the current best practice.
Our conversational agent uses two architectures to simulate a specialized reminiscence therapist. The block in charge of generating questions is based on the work Show, Attend and Tell BIBREF13. This work generates descriptions from pictures, also known as image captioning. In our case, we focus on generating questions from pictures. Our second architecture is inspired by Neural Conversational Model from BIBREF14 where the author presents an end-to-end approach to generate simple conversations. Building an open-domain conversational agent is a challenging problem. As addressed in BIBREF15 and BIBREF16, the lack of a consistent personality and lack of long-term memory which produces some meaningless responses in these models are still unresolved problems.
Some works have proposed conversational agents for older adults with a variety of uses, such as stimulate conversation BIBREF17 , palliative care BIBREF18 or daily assistance. An example of them is ‘Billie’ reported in BIBREF19 which is a virtual agent that uses facial expression for a more natural behavior and is focused on managing user’s calendar, or ‘Mary’ BIBREF20 that assists the users by organizing their tasks offering reminders and guidance with household activities. Both of the works perform well on its specific tasks, but report difficulties to maintain a casual conversation. Other works focus on the content used in Reminiscence therapy. Like BIBREF21 where the authors propose a system that recommends multimedia content to be used in therapy, or Visual Dialog BIBREF22 where the conversational agent is the one that has to answer the questions about the image.
Methodology
In this section we explain the two main components of our model, as well as how the interaction with the model works. We named it Elisabot, and its goal is to maintain a dialogue with the patient about the user's life experiences.
Before starting the conversation, the user must introduce photos that should contain significant moments for him/her. The system randomly chooses one of these pictures and analyses the content. Then, Elisabot shows the selected picture and starts the conversation by asking a question about the picture. The user should give an answer, even though he does not know it, and Elisabot makes a relevant comment on it. The cycle starts again by asking another relevant question about the image and the flow is repeated for 4 to 6 times until the picture is changed. The Figure FIGREF3 summarizes the workflow of our system.
Elisabot is composed of two models: the model in charge of asking questions about the image, which we will refer to as the VQG model, and the Chatbot model, which tries to make the dialogue more engaging by giving feedback on the user's answers.
Methodology ::: VQG model
The algorithm behind VQG consists in an Encoder-Decoder architecture with attention. The Encoder takes as input one of the given photos $I$ from the user and learns its information using a CNN. CNNs have been widely studied for computer vision tasks. The CNN provides the image's learned features to the Decoder which generates the question $y$ word by word by using an attention mechanism with a Long Short-Term Memory (LSTM). The model is trained to maximize the likelihood $p(y|I)$ of producing a target sequence of words:
where $K$ is the size of the vocabulary and $C$ is the length of the caption.
Since there are already Convolutional Neural Networks (CNNs) trained on large datasets to represent images with an outstanding performance, we make use of transfer learning to integrate a pre-trained model into our algorithm. In particular, we use a ResNet-101 BIBREF23 model trained on ImageNet. We discard the last 2 layers, since these layers classify the image into categories and we only need to extract its features.
Methodology ::: Chatbot network
The core of our chatbot model is a sequence-to-sequence BIBREF24. This architecture uses a Recurrent Neural Network (RNN) to encode a variable-length sequence to obtain a large fixed dimensional vector representation and another RNN to decode the vector into a variable-length sequence.
The encoder iterates through the input sentence one word at each time step producing an output vector and a hidden state vector. The hidden state vector is passed to the next time step, while the output vector is stored. We use a bidirectional Gated Recurrent Unit (GRU), meaning we use two GRUs one fed in sequential order and another one fed in reverse order. The outputs of both networks are summed at each time step, so we encode past and future context.
The final hidden state $h_t^{enc}$ is fed into the decoder as the initial state $h_0^{dec}$. By using an attention mechanism, the decoder uses the encoder’s context vectors, and internal hidden states to generate the next word in the sequence. It continues generating words until it outputs an $<$end$>$ token, representing the end of the sentence. We use an attention layer to multiply attention weights to encoder's outputs to focus on the relevant information when decoding the sequence. This approach have shown better performance on sequence-to-sequence models BIBREF25.
Datasets
One of the first requirements to develop an architecture using a machine learning approach is a training dataset. The lack of open-source datasets containing dialogues from reminiscence therapy led us to use a dataset with content similar to the one used in the therapy. In particular, we use two types of datasets to train our models: a dataset that maps pictures to questions, and an open-domain conversation dataset. The details of the two datasets are as follows.
Datasets ::: MS-COCO, Bing and Flickr datasets
We use MS COCO, Bing and Flickr datasets from BIBREF26 to train the model that generates questions. These datasets contain natural questions about images with the purpose of knowing more about the picture. As can be seen in the Figure FIGREF8, questions cannot be answered by only looking at the image. Each source contains 5,000 images with 5 questions per image, adding a total of 15,000 images with 75,000 questions. COCO dataset includes images of complex everyday scenes containing common objects in their natural context, but it is limited in terms of the concepts it covers. Bing dataset contains more event related questions and has a wider range of questions longitudes (between 3 and 20 words), while Flickr questions are shorter (less than 6 words) and the images appear to be more casual.
Datasets ::: Persona-chat and Cornell-movie corpus
We use two datasets to train our chatbot model. The first one is the Persona-chat BIBREF15 which contains dialogues between two people with different profiles that are trying to know each other. It is complemented by the Cornell-movie dialogues dataset BIBREF27, which contains a collection of fictional conversations extracted from raw movie scripts. Persona-chat's sentences have a maximum of 15 words, making it easier to learn for machines and a total of 162,064 utterances over 10,907 dialogues. While Cornell-movie dataset contains 304,713 utterances over 220,579 conversational exchanges between 10,292 pairs of movie characters.
Validation
An important aspect of dialogue response generation systems is how to evaluate the quality of the generated response. This section presents the training procedure and the quantitative evaluation of the model, together with some qualitative results.
Validation ::: Implementation
Both models are trained using Stochastic Gradient Descent with ADAM optimization BIBREF28 and a learning rate of 1e-4. In addition, we use dropout regularization BIBREF29, which prevents over-fitting by dropping some units of the network.
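In PyTorch terms this set-up corresponds roughly to the sketch below; the tiny two-layer network is only a stand-in so the snippet runs, not the actual VQG or chatbot architecture (the 50% dropout, vocabulary size of 11,214 and batch size of 64 are taken from the settings described below):

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(2048, 512), nn.ReLU(),
                          nn.Dropout(p=0.5), nn.Linear(512, 11214))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # ADAM, learning rate 1e-4
    criterion = nn.CrossEntropyLoss()

    features = torch.randn(64, 2048)           # stand-in for a batch of image features
    targets = torch.randint(0, 11214, (64,))   # stand-in for target word indices

    optimizer.zero_grad()
    loss = criterion(model(features), targets)
    loss.backward()
    optimizer.step()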
The VQG encoder is composed of 2048 neuron cells, while the VQG decoder has an attention layer of size 512 followed by an embedding layer of size 512 and an LSTM of the same size. We use a dropout of 50% and a beam search of width 7 for decoding, which lets us obtain up to 5 output questions. The vocabulary we use consists of all words seen 3 or more times in the training set, which amounts to 11,214 unique tokens. Unknown words are mapped to an $<$unk$>$ token during training, but we do not allow the decoder to produce this token at test time. We also set a maximum sequence length of 6 words, as we want simple questions that are easy to understand and easy for the model to learn.
In the Chatbot model we use a hidden size of 500 and Dropout regularization of 25%. For decoding we use greedy search, which consists in making the optimal token choice at each step. We first train it with Persona-chat and then fine-tune it with Cornell dataset. The vocabulary we use consists of all words seen 3 or more times in Persona-chat dataset and we set a maximum sequence length of 12 words. For the hyperparameter setting, we use a batch size of 64.
Validation ::: Quantitative evaluation
We use the BLEU BIBREF30 metric on the validation set for the VQG model training. BLEU is a measure of similitude between generated and target sequences of words, widely used in natural language processing. It assumes that valid generated responses have significant word overlap with the ground truth responses. We use it because in this case we have five different references for each of the generated questions. We obtain a BLEU score of 2.07.
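A sketch of this multi-reference evaluation with NLTK (the questions below are toy examples, not items from the validation set; the 0–100 scaling is our assumption based on the 2.07 reported above):

    from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

    # One generated question scored against its five reference questions.
    references = [[
        "what breed is the dog ?".split(),
        "is that your dog ?".split(),
        "what is the dog 's name ?".split(),
        "how old is the dog ?".split(),
        "where was this picture taken ?".split(),
    ]]
    hypotheses = ["what kind of dog is that ?".split()]

    score = corpus_bleu(references, hypotheses,
                        smoothing_function=SmoothingFunction().method1)
    print(100 * score)  # reported on a 0-100 scale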
Our chatbot model instead, only have one reference ground truth in training when generating a sequence of words. We considered that it was not a good metric to apply as in some occasions responses have the same meaning, but do not share any words in common. Thus, we save several models with different hyperparameters and at different number of training iterations and compare them using human evaluation, to chose the model that performs better in a conversation.
Validation ::: Qualitative results
Our first goal was to generate meaningful questions from the provided pictures. Table TABREF16 contains some generated questions by our VQG model for each of the proposed images. These images were not in the training set as they were taken from photo albums, so the model was the first time that had these images as input. By looking at the questions, we can easily realise that the model is not only able to generate questions grammatically correct, but to understand the content of the image and ask about it. Furthermore, we not only wanted to generate only a question per image, but to bring out up to five questions for a deeper conversation.
We used different kind of pictures as we wanted a robust model able to work with all kind of images. In the Table TABREF16 we can see both, old gray and colorful photos, both with vertical and horizontal shapes. Most of the pictures we selected for testing were showing people, because these are the type of pictures used in reminiscence therapy, as they evoke memories from the relationships with that persons. We also tested pictures showing pets, cars and motorbikes as some people are very attached to them, triggering strong emotions.
Most of the generated questions could be asked by a person. Nonetheless, we found some of the questions such as How did the car get damaged? that do not correspond with the image's content, or are not well formulated like What body of dog is that?, but can be easily understood.
About the chatbot module, we test it manually by having a conversation with it. In the following lines, we can see two samples of a conversation, where each sentence starting with "$>$" is an input to our seq2seq model. In this first sample, the chatbot model is clearly overfitted as it repeats the same sentence twice. While in the second sample, we can see that the model is able to follow the conversation.
Overfitted chatbot model:
$>$ hi
Elisabot: hello ! how are you ?
$>$ fine and you?
Elisabot : i am good do you have any hobbies ?
$>$ yes i like to ski
Elisabot: i love to play sports . it is my favorite
$>$ I have a sister
Elisabot : i have a dog . i love them .
$>$ I have a dog too
Elisabot: i have a dog . i love them
$>$ ok
Elisabot : i am not really a good person
$>$ why?
Elisabot : i love to play sports . it is my favorite
Final chatbot model:
$>$ hello
Elisabot: hello how are you ?
$>$ fine and you?
Elisabot : i m great !
$>$ I have a sister, do you have any brother or sister?
Elisabot : i have a iphone but i have no idea .
$>$ ok
Elisabot : i am a pro fan of thrones
$>$ me too
Elisabot : i like to eat healthy
$>$ have you seen the last chapter?
Elisabot : no i have not .
$>$ where do you live?
Elisabot : i live in the mountains
Usability study
As most of the metrics correlate very weakly with human judgements in the non-technical domain BIBREF31 we decide to evaluate our system with a simple user study with two patients. We present the user interface built and the feedback obtained from the patients.
Usability study ::: User interface
We developed a user interface for Elisabot with Telegram, an instant messaging application available for smartphones or computers. We select it because it is easy to use and it offers an API for developers to connect bots to the Telegram system. It enables to create special accounts for bots which do not require a phone number to set up.
Telegram is only the interface for the code running in the server. The bot is executed via an HTTP-request to the API. Users can start a conversation with Elisabot by typing @TherapistElisabot in the searcher and executing the command /start, as can be seen in the Figure FIGREF31. Messages, commands and requests sent by users are passed to the software running on the server. We add /change, /yes and /exit commands to enable more functionalities. /Change gives the opportunity to the user to change the image in case the user does not want to talk about it, /yes accepts the image which is going to talk about and /exit finishes the dialogue with Elisabot. The commands can be executed either by tapping on the linked text or typing them.
Feedback from patients
We designed a usability study where users with and without mild cognitive impairment interacted with the system with the help of a doctor and one of the authors. The purpose was to study the acceptability and feasibility of the system with patients of mild cognitive impairment. The users were all older than 60 years old. The sessions lasted 30 minutes and were carried out by using a laptop computer connected to Telegram. As Elisabot's language is English we translated the questions to the users and the answers to Elisabot.
Figure FIGREF38 is a sample of the session we did with mild cognitive impairment patients from anonymized institution and location. The picture provided by the patient (Figure FIGREF37 is blurred for user's privacy rights. In this experiment all the generated questions were right according to the image content, but the feedback was wrong for some of the answers. We can see that it was the last picture of the session as when Elisabot asks if the user wants to continue or leave, and he decides to continue, Elisabot finishes the session as there are no more pictures remaining to talk about.
At the end of the session, we administered a survey to ask participants the following questions about their assessment of Elisabot:
Did you like it?
Did you find it engaging?
How difficult have you found it?
Responses were given on a five-point scale ranging from strongly disagree (1) to strongly agree (5) and very easy (1) to very difficult (5). The results were 4.6 for amusing and engaging and 2.6 for difficulty. Healthy users found it very easy to use (1/5) and even a bit silly, because of some of the generated questions and comments. Nevertheless, users with mild cognitive impairment found it engaging (5/5) and challenging (4/5), because of the effort they had to make to remember the answers for some of the generated questions. All the users had in common that they enjoyed doing the therapy with Elisabot.
Conclusions
We presented a dialogue system for handling sessions of 30 minutes of reminiscence therapy. Elisabot, our conversational agent leads the therapy by showing a picture and generating some questions. The goal of the system is to improve users mood and stimulate their memory and communication skills. Two models were proposed to generate the dialogue system for the reminiscence therapy. A visual question generator composed of a CNN and a LSTM with attention and a sequence-to-sequence model to generate feedback on the user's answers. We realize that fine-tuning our chatbot model with another dataset improved the generated dialogue.
The manual evaluation shows that our model can generate questions and feedback that are well formulated grammatically, but on some occasions not appropriate in content. As expected, it has a tendency to produce non-specific answers and to lose its consistency in the comments with respect to what it has said before. However, the overall usability evaluation of the system by users with mild cognitive impairment shows that they found the session very entertaining and challenging. They had to make an effort to remember the answers to some of the questions, but they were very satisfied when they achieved it. Nevertheless, we see that for the therapy to work properly, it is essential to have a person supporting the user and helping them remember the experiences being asked about.
This project has many possible future lines. In our future work, we suggest training the model on the Reddit dataset as well, which could improve the chatbot model, as it contains many open-domain conversations. Moreover, we would like to include speech recognition and generation, as well as real-time text translation, to make Elisabot more autonomous and open to older adults with reading and writing difficulties. Furthermore, the lack of consistency in the dialogue might be avoided by improving the architecture to include information about the past conversation in the model. We also think it would be a good idea to recognize feelings from the user's answers and give feedback according to them.
Acknowledgements
Marioan Caros was funded with a scholarship from the Fundacion Vodafona Spain. Petia Radeva was partially funded by TIN2018-095232-B-C21, 2017 SGR 1742, Nestore, Validithi, and CERCA Programme/Generalitat de Catalunya. We acknowledge the support of NVIDIA Corporation with the donation of Titan Xp GPUs. | 5 questions per image |
37d829cd42db9ae3d56ab30953a7cf9eda050841 | 37d829cd42db9ae3d56ab30953a7cf9eda050841_0 | Q: Is machine learning system underneath similar to image caption ML systems?
Text: Introduction
Increases in life expectancy in the last century have resulted in a large number of people living to old ages and will result in a double number of dementia cases by the middle of the century BIBREF0BIBREF1. The most common form of dementia is Alzheimer disease which contributes to 60–70% of cases BIBREF2. Research focused on identifying treatments to slow down the evolution of Alzheimer's disease is a very active pursuit, but it has been only successful in terms of developing therapies that eases the symptoms without addressing the cause BIBREF3BIBREF4. Besides, people with dementia might have some barriers to access to the therapies, such as cost, availability and displacement to the care home or hospital, where the therapy takes place. We believe that Artificial Intelligence (AI) can contribute in innovative systems to give accessibility and offer new solutions to the patients needs, as well as help relatives and caregivers to understand the illness of their family member or patient and monitor the progress of the dementia.
Therapies such as reminiscence, that stimulate memories of the patient's past, have well documented benefits on social, mental and emotional well-being BIBREF5BIBREF6, making them a very desirable practice, especially for older adults. Reminiscence therapy in particular involves the discussion of events and past experiences using tangible prompts such as pictures or music to evoke memories and stimulate conversation BIBREF7. With this aim, we explore multi-modal deep learning architectures to be used to develop an intuitive, easy to use, and robust dialogue system to automatize the reminiscence therapy for people affected by mild cognitive impairment or at early stages of Alzheimer's disease.
We propose a conversational agent that simulates a reminiscence therapist by asking questions about the patient's experiences. Questions are generated from pictures provided by the patient, which contain significant moments or important people in the user's life. Moreover, to engage the user in the conversation we propose a second model which generates comments on the user's answers: a chatbot model trained with a dataset containing simple conversations between different people. The activity is intended to be challenging for the patient, as the questions may require the user to exercise their memory. Our contributions include:
Automation of the Reminiscence therapy by using a multi-modal approach that generates questions from pictures, without using a reminiscence therapy dataset.
An end-to-end deep learning approach which does not require hand-crafted rules and is ready to be used by mild cognitive impairment patients. The system is designed to be intuitive and easy to use, and can be reached from any smartphone with an internet connection.
Related Work
The origin of chatbots goes back to 1966 with the creation of ELIZA BIBREF8 by Joseph Weizenbaum at MIT. Its implementation consisted in pattern matching and substitution methodology. Recently, data driven approaches have drawn significant attention. Existing work along this line includes retrieval-based methods BIBREF9BIBREF10 and generation-based methodsBIBREF11BIBREF12. In this work we focus on generative models, where sequence-to-sequence algorithm that uses RNNs to encode and decode inputs into responses is a current best practice.
Our conversational agent uses two architectures to simulate a specialized reminiscence therapist. The block in charge of generating questions is based on the work Show, Attend and Tell BIBREF13. This work generates descriptions from pictures, also known as image captioning. In our case, we focus on generating questions from pictures. Our second architecture is inspired by Neural Conversational Model from BIBREF14 where the author presents an end-to-end approach to generate simple conversations. Building an open-domain conversational agent is a challenging problem. As addressed in BIBREF15 and BIBREF16, the lack of a consistent personality and lack of long-term memory which produces some meaningless responses in these models are still unresolved problems.
Some works have proposed conversational agents for older adults with a variety of uses, such as stimulate conversation BIBREF17 , palliative care BIBREF18 or daily assistance. An example of them is ‘Billie’ reported in BIBREF19 which is a virtual agent that uses facial expression for a more natural behavior and is focused on managing user’s calendar, or ‘Mary’ BIBREF20 that assists the users by organizing their tasks offering reminders and guidance with household activities. Both of the works perform well on its specific tasks, but report difficulties to maintain a casual conversation. Other works focus on the content used in Reminiscence therapy. Like BIBREF21 where the authors propose a system that recommends multimedia content to be used in therapy, or Visual Dialog BIBREF22 where the conversational agent is the one that has to answer the questions about the image.
Methodology
In this section we explain the two main components of our model, as well as how the interaction with the model works. We named it Elisabot, and its goal is to maintain a dialogue with the patient about the user's life experiences.
Before starting the conversation, the user must introduce photos that should contain significant moments for him/her. The system randomly chooses one of these pictures and analyses the content. Then, Elisabot shows the selected picture and starts the conversation by asking a question about the picture. The user should give an answer, even though he does not know it, and Elisabot makes a relevant comment on it. The cycle starts again by asking another relevant question about the image and the flow is repeated for 4 to 6 times until the picture is changed. The Figure FIGREF3 summarizes the workflow of our system.
Elisabot is composed of two models: the model in charge of asking questions about the image, which we will refer to as the VQG model, and the Chatbot model, which tries to make the dialogue more engaging by giving feedback on the user's answers.
Methodology ::: VQG model
The algorithm behind VQG consists in an Encoder-Decoder architecture with attention. The Encoder takes as input one of the given photos $I$ from the user and learns its information using a CNN. CNNs have been widely studied for computer vision tasks. The CNN provides the image's learned features to the Decoder which generates the question $y$ word by word by using an attention mechanism with a Long Short-Term Memory (LSTM). The model is trained to maximize the likelihood $p(y|I)$ of producing a target sequence of words:
where $K$ is the size of the vocabulary and $C$ is the length of the caption.
Since there are already Convolutional Neural Networks (CNNs) trained on large datasets to represent images with an outstanding performance, we make use of transfer learning to integrate a pre-trained model into our algorithm. In particular, we use a ResNet-101 BIBREF23 model trained on ImageNet. We discard the last 2 layers, since these layers classify the image into categories and we only need to extract its features.
Methodology ::: Chatbot network
The core of our chatbot model is a sequence-to-sequence BIBREF24. This architecture uses a Recurrent Neural Network (RNN) to encode a variable-length sequence to obtain a large fixed dimensional vector representation and another RNN to decode the vector into a variable-length sequence.
The encoder iterates through the input sentence one word at each time step producing an output vector and a hidden state vector. The hidden state vector is passed to the next time step, while the output vector is stored. We use a bidirectional Gated Recurrent Unit (GRU), meaning we use two GRUs one fed in sequential order and another one fed in reverse order. The outputs of both networks are summed at each time step, so we encode past and future context.
The final hidden state $h_t^{enc}$ is fed into the decoder as the initial state $h_0^{dec}$. By using an attention mechanism, the decoder uses the encoder’s context vectors and internal hidden states to generate the next word in the sequence. It continues generating words until it outputs an $<$end$>$ token, representing the end of the sentence. We use an attention layer that multiplies attention weights with the encoder's outputs to focus on the relevant information when decoding the sequence. This approach has shown better performance on sequence-to-sequence models BIBREF25.
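The exact attention variant is not specified in the text; a common dot-product (Luong-style) formulation that matches the description of weighting the encoder outputs is sketched below for illustration only:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DotAttention(nn.Module):
        """Sketch of dot-product attention over the encoder outputs."""
        def forward(self, decoder_hidden, encoder_outputs):
            # decoder_hidden:  (batch, hidden)
            # encoder_outputs: (src_len, batch, hidden)
            scores = torch.sum(decoder_hidden * encoder_outputs, dim=2)  # (src_len, batch)
            weights = F.softmax(scores.t(), dim=1).unsqueeze(1)          # (batch, 1, src_len)
            context = weights.bmm(encoder_outputs.transpose(0, 1))       # (batch, 1, hidden)
            return context.squeeze(1), weights.squeeze(1)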
Datasets
One of the first requirements to develop an architecture using a machine learning approach is a training dataset. The lack of open-source datasets containing dialogues from reminiscence therapy led us to use a dataset with content similar to the one used in the therapy. In particular, we use two types of datasets to train our models: a dataset that maps pictures to questions, and an open-domain conversation dataset. The details of the two datasets are as follows.
Datasets ::: MS-COCO, Bing and Flickr datasets
We use MS COCO, Bing and Flickr datasets from BIBREF26 to train the model that generates questions. These datasets contain natural questions about images with the purpose of knowing more about the picture. As can be seen in the Figure FIGREF8, questions cannot be answered by only looking at the image. Each source contains 5,000 images with 5 questions per image, adding a total of 15,000 images with 75,000 questions. COCO dataset includes images of complex everyday scenes containing common objects in their natural context, but it is limited in terms of the concepts it covers. Bing dataset contains more event related questions and has a wider range of questions longitudes (between 3 and 20 words), while Flickr questions are shorter (less than 6 words) and the images appear to be more casual.
Datasets ::: Persona-chat and Cornell-movie corpus
We use two datasets to train our chatbot model. The first one is the Persona-chat BIBREF15 which contains dialogues between two people with different profiles that are trying to know each other. It is complemented by the Cornell-movie dialogues dataset BIBREF27, which contains a collection of fictional conversations extracted from raw movie scripts. Persona-chat's sentences have a maximum of 15 words, making it easier to learn for machines and a total of 162,064 utterances over 10,907 dialogues. While Cornell-movie dataset contains 304,713 utterances over 220,579 conversational exchanges between 10,292 pairs of movie characters.
Validation
An important aspect of dialogue response generation systems is how to evaluate the quality of the generated response. This section presents the training procedure and the quantitative evaluation of the model, together with some qualitative results.
Validation ::: Implementation
Both models are trained using Stochastic Gradient Descent with ADAM optimization BIBREF28 and a learning rate of 1e-4. In addition, we use dropout regularization BIBREF29, which prevents over-fitting by dropping some units of the network.
The VQG encoder is composed of 2048 neuron cells, while the VQG decoder has an attention layer of size 512 followed by an embedding layer of size 512 and an LSTM of the same size. We use a dropout of 50% and a beam search of width 7 for decoding, which lets us obtain up to 5 output questions. The vocabulary we use consists of all words seen 3 or more times in the training set, which amounts to 11,214 unique tokens. Unknown words are mapped to an $<$unk$>$ token during training, but we do not allow the decoder to produce this token at test time. We also set a maximum sequence length of 6 words, as we want simple questions that are easy to understand and easy for the model to learn.
In the Chatbot model we use a hidden size of 500 and Dropout regularization of 25%. For decoding we use greedy search, which consists in making the optimal token choice at each step. We first train it with Persona-chat and then fine-tune it with Cornell dataset. The vocabulary we use consists of all words seen 3 or more times in Persona-chat dataset and we set a maximum sequence length of 12 words. For the hyperparameter setting, we use a batch size of 64.
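A sketch of the greedy decoding loop described here (the decoder interface — taking the previous token, the hidden state and the encoder outputs, and returning logits plus the new hidden state — is an assumption for illustration, not the authors' implementation):

    import torch

    def greedy_decode(decoder, hidden, encoder_outputs, sos_idx, eos_idx, max_len=12):
        """Greedy search: keep only the highest-probability token at every step (sketch)."""
        token = torch.tensor([[sos_idx]])            # start-of-sentence token, batch of 1
        decoded = []
        for _ in range(max_len):                     # maximum sequence length of 12 words
            logits, hidden = decoder(token, hidden, encoder_outputs)
            token = logits.argmax(dim=-1, keepdim=True)  # optimal token choice at this step
            if token.item() == eos_idx:
                break
            decoded.append(token.item())
        return decoded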
Validation ::: Quantitative evaluation
We use the BLEU BIBREF30 metric on the validation set for the VQG model training. BLEU is a measure of similitude between generated and target sequences of words, widely used in natural language processing. It assumes that valid generated responses have significant word overlap with the ground truth responses. We use it because in this case we have five different references for each of the generated questions. We obtain a BLEU score of 2.07.
Our chatbot model instead, only have one reference ground truth in training when generating a sequence of words. We considered that it was not a good metric to apply as in some occasions responses have the same meaning, but do not share any words in common. Thus, we save several models with different hyperparameters and at different number of training iterations and compare them using human evaluation, to chose the model that performs better in a conversation.
Validation ::: Qualitative results
Our first goal was to generate meaningful questions from the provided pictures. Table TABREF16 contains some questions generated by our VQG model for each of the proposed images. These images were not in the training set, as they were taken from photo albums, so it was the first time the model had these images as input. By looking at the questions, we can easily realise that the model is not only able to generate grammatically correct questions, but also to understand the content of the image and ask about it. Furthermore, we wanted to generate not just one question per image, but to bring out up to five questions for a deeper conversation.
We used different kinds of pictures, as we wanted a robust model able to work with all kinds of images. In Table TABREF16 we can see both old grayscale and colorful photos, in both vertical and horizontal formats. Most of the pictures we selected for testing show people, because these are the type of pictures used in reminiscence therapy, as they evoke memories from the relationships with those persons. We also tested pictures showing pets, cars and motorbikes, as some people are very attached to them, triggering strong emotions.
Most of the generated questions could have been asked by a person. Nonetheless, we found that some of the questions, such as How did the car get damaged?, do not correspond with the image's content, or are not well formulated, like What body of dog is that?, but they can still be easily understood.
As for the chatbot module, we test it manually by having conversations with it. In the following lines, we can see two samples of a conversation, where each sentence starting with "$>$" is an input to our seq2seq model. In the first sample, the chatbot model is clearly overfitted, as it repeats the same sentence twice. In the second sample, we can see that the model is able to follow the conversation.
Overfitted chatbot model:
$>$ hi
Elisabot: hello ! how are you ?
$>$ fine and you?
Elisabot : i am good do you have any hobbies ?
$>$ yes i like to ski
Elisabot: i love to play sports . it is my favorite
$>$ I have a sister
Elisabot : i have a dog . i love them .
$>$ I have a dog too
Elisabot: i have a dog . i love them
$>$ ok
Elisabot : i am not really a good person
$>$ why?
Elisabot : i love to play sports . it is my favorite
Final chatbot model:
$>$ hello
Elisabot: hello how are you ?
$>$ fine and you?
Elisabot : i m great !
$>$ I have a sister, do you have any brother or sister?
Elisabot : i have a iphone but i have no idea .
$>$ ok
Elisabot : i am a pro fan of thrones
$>$ me too
Elisabot : i like to eat healthy
$>$ have you seen the last chapter?
Elisabot : no i have not .
$>$ where do you live?
Elisabot : i live in the mountains
Usability study
As most of the metrics correlate very weakly with human judgements in the non-technical domain BIBREF31 we decide to evaluate our system with a simple user study with two patients. We present the user interface built and the feedback obtained from the patients.
Usability study ::: User interface
We developed a user interface for Elisabot with Telegram, an instant messaging application available for smartphones or computers. We selected it because it is easy to use and it offers an API for developers to connect bots to the Telegram system. It enables the creation of special accounts for bots, which do not require a phone number to set up.
Telegram is only the interface for the code running on the server. The bot is executed via an HTTP request to the API. Users can start a conversation with Elisabot by typing @TherapistElisabot in the searcher and executing the command /start, as can be seen in Figure FIGREF31. Messages, commands and requests sent by users are passed to the software running on the server. We add the /change, /yes and /exit commands to enable more functionalities. /change gives the user the opportunity to change the image in case they do not want to talk about it, /yes accepts the image to talk about, and /exit finishes the dialogue with Elisabot. The commands can be executed either by tapping on the linked text or typing them.
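The paper does not name the library used to connect to the Telegram API; a minimal sketch with the python-telegram-bot package (v13-style API), where the token and the handler bodies are placeholders, could register the commands like this:

```python
from telegram.ext import Updater, CommandHandler

def start(update, context):
    update.message.reply_text("Hi! I am Elisabot. Send /yes to talk about the "
                              "picture, /change for another one, /exit to finish.")

def change(update, context):
    update.message.reply_text("Ok, let's try another picture.")   # placeholder logic

def leave(update, context):
    update.message.reply_text("Thank you for the session, see you soon!")

updater = Updater("ELISABOT_TOKEN")            # hypothetical bot token from @BotFather
dispatcher = updater.dispatcher
dispatcher.add_handler(CommandHandler("start", start))
dispatcher.add_handler(CommandHandler("change", change))
dispatcher.add_handler(CommandHandler("yes", start))    # placeholder: accept the picture
dispatcher.add_handler(CommandHandler("exit", leave))
updater.start_polling()
updater.idle()
```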
Feedback from patients
We designed a usability study where users with and without mild cognitive impairment interacted with the system with the help of a doctor and one of the authors. The purpose was to study the acceptability and feasibility of the system with patients with mild cognitive impairment. The users were all older than 60 years. The sessions lasted 30 minutes and were carried out using a laptop computer connected to Telegram. As Elisabot's language is English, we translated the questions for the users and their answers back for Elisabot.
Figure FIGREF38 is a sample of the session we did with mild cognitive impairment patients from an anonymized institution and location. The picture provided by the patient (Figure FIGREF37) is blurred for the user's privacy rights. In this experiment all the generated questions were right according to the image content, but the feedback was wrong for some of the answers. We can see that it was the last picture of the session: when Elisabot asks if the user wants to continue or leave and he decides to continue, Elisabot finishes the session, as there are no more pictures remaining to talk about.
At the end of the session, we administered a survey asking participants the following questions about their assessment of Elisabot:
Did you like it?
Did you find it engaging?
How difficult have you found it?
Responses were given on a five-point scale ranging from strongly disagree (1) to strongly agree (5) and very easy (1) to very difficult (5). The results were 4.6 for amusing and engaging and 2.6 for difficulty. Healthy users found it very easy to use (1/5) and even a bit silly, because of some of the generated questions and comments. Nevertheless, users with mild cognitive impairment found it engaging (5/5) and challenging (4/5), because of the effort they had to make to remember the answers for some of the generated questions. All the users had in common that they enjoyed doing the therapy with Elisabot.
Conclusions
We presented a dialogue system for handling sessions of 30 minutes of reminiscence therapy. Elisabot, our conversational agent, leads the therapy by showing a picture and generating some questions. The goal of the system is to improve users' mood and stimulate their memory and communication skills. Two models were proposed to generate the dialogue system for the reminiscence therapy: a visual question generator composed of a CNN and an LSTM with attention, and a sequence-to-sequence model to generate feedback on the user's answers. We found that fine-tuning our chatbot model with another dataset improved the generated dialogue.
The manual evaluation shows that our model can generate grammatically well-formulated questions and feedback, but on some occasions they are not appropriate in content. As expected, it has a tendency to produce non-specific answers and to lose consistency in its comments with respect to what it has said before. However, the overall usability evaluation of the system by users with mild cognitive impairment shows that they found the session very entertaining and challenging. They had to make an effort to remember the answers to some of the questions, but they were very satisfied when they achieved it. Still, we see that for the proper performance of the therapy, a person is essential to support the user and help them remember the experiences that are being asked about.
This project has many possible future lines. In our future work, we suggest training the model with the Reddit dataset, which could improve the chatbot model, as it has many open-domain conversations. Moreover, we would like to include speech recognition and generation, as well as real-time text translation, to make Elisabot more autonomous and open to older adults with reading and writing difficulties. Furthermore, the lack of consistency in the dialogue might be avoided by improving the architecture to include information about the past conversation in the model. We also think it would be a good idea to recognize feelings from the user's answers and give feedback according to them.
Acknowledgements
Mariona Caros was funded with a scholarship from the Fundacion Vodafone Spain. Petia Radeva was partially funded by TIN2018-095232-B-C21, 2017 SGR 1742, Nestore, Validithi, and CERCA Programme/Generalitat de Catalunya. We acknowledge the support of NVIDIA Corporation with the donation of Titan Xp GPUs.
4b41f399b193d259fd6e24f3c6e95dc5cae926dd | 4b41f399b193d259fd6e24f3c6e95dc5cae926dd_0 | Q: How big dataset is used for training this system?
Text: Introduction
Increases in life expectancy in the last century have resulted in a large number of people living to old ages and will result in a doubling of dementia cases by the middle of the century BIBREF0, BIBREF1. The most common form of dementia is Alzheimer's disease, which contributes to 60–70% of cases BIBREF2. Research focused on identifying treatments to slow down the evolution of Alzheimer's disease is a very active pursuit, but it has only been successful in terms of developing therapies that ease the symptoms without addressing the cause BIBREF3, BIBREF4. Besides, people with dementia might face some barriers to access the therapies, such as cost, availability and displacement to the care home or hospital where the therapy takes place. We believe that Artificial Intelligence (AI) can contribute with innovative systems to give accessibility and offer new solutions to the patients' needs, as well as help relatives and caregivers to understand the illness of their family member or patient and monitor the progress of the dementia.
Therapies such as reminiscence, which stimulate memories of the patient's past, have well documented benefits on social, mental and emotional well-being BIBREF5, BIBREF6, making them a very desirable practice, especially for older adults. Reminiscence therapy in particular involves the discussion of events and past experiences using tangible prompts such as pictures or music to evoke memories and stimulate conversation BIBREF7. With this aim, we explore multi-modal deep learning architectures to be used to develop an intuitive, easy to use, and robust dialogue system to automate the reminiscence therapy for people affected by mild cognitive impairment or at early stages of Alzheimer's disease.
We propose a conversational agent that simulates a reminiscence therapist by asking questions about the patient's experiences. Questions are generated from pictures provided by the patient, which contain significant moments or important people in the user's life. Moreover, to engage the user in the conversation we propose a second model which generates comments on the user's answers: a chatbot model trained with a dataset containing simple conversations between different people. The activity is intended to be challenging for the patient, as the questions may require the user to exercise their memory. Our contributions include:
Automation of the Reminiscence therapy by using a multi-modal approach that generates questions from pictures, without using a reminiscence therapy dataset.
An end-to-end deep learning approach which does not require hand-crafted rules and is ready to be used by mild cognitive impairment patients. The system is designed to be intuitive and easy to use and can be reached from any smartphone with an internet connection.
Related Work
The origin of chatbots goes back to 1966 with the creation of ELIZA BIBREF8 by Joseph Weizenbaum at MIT. Its implementation consisted of pattern matching and substitution methodology. Recently, data driven approaches have drawn significant attention. Existing work along this line includes retrieval-based methods BIBREF9, BIBREF10 and generation-based methods BIBREF11, BIBREF12. In this work we focus on generative models, where the sequence-to-sequence algorithm, which uses RNNs to encode and decode inputs into responses, is a current best practice.
Our conversational agent uses two architectures to simulate a specialized reminiscence therapist. The block in charge of generating questions is based on the work Show, Attend and Tell BIBREF13. This work generates descriptions from pictures, also known as image captioning. In our case, we focus on generating questions from pictures. Our second architecture is inspired by the Neural Conversational Model from BIBREF14, where the author presents an end-to-end approach to generate simple conversations. Building an open-domain conversational agent is a challenging problem. As addressed in BIBREF15 and BIBREF16, the lack of a consistent personality and the lack of long-term memory, which produce some meaningless responses in these models, are still unresolved problems.
Some works have proposed conversational agents for older adults with a variety of uses, such as stimulating conversation BIBREF17, palliative care BIBREF18 or daily assistance. One example is ‘Billie’, reported in BIBREF19, a virtual agent that uses facial expressions for a more natural behavior and is focused on managing the user's calendar; another is ‘Mary’ BIBREF20, which assists users by organizing their tasks, offering reminders and guidance with household activities. Both works perform well on their specific tasks, but report difficulties in maintaining a casual conversation. Other works focus on the content used in reminiscence therapy, like BIBREF21, where the authors propose a system that recommends multimedia content to be used in therapy, or Visual Dialog BIBREF22, where the conversational agent is the one that has to answer the questions about the image.
Methodology
In this section we explain the two main components of our model, as well as how the interaction with the model works. We named it Elisabot and its goal is to maintain a dialogue with the patient about the user's life experiences.
Before starting the conversation, the user must provide photos that contain significant moments for him/her. The system randomly chooses one of these pictures and analyses its content. Then, Elisabot shows the selected picture and starts the conversation by asking a question about the picture. The user should give an answer, even if he/she does not know it, and Elisabot makes a relevant comment on it. The cycle starts again by asking another relevant question about the image, and the flow is repeated 4 to 6 times until the picture is changed. Figure FIGREF3 summarizes the workflow of our system.
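As a schematic illustration of this flow (not the authors' code; the model calls and I/O helpers below are hypothetical stand-ins), the session loop can be summarized as:

```python
import random

# hypothetical stand-ins for the real VQG model, chatbot model and messaging I/O
def ask_questions(photo):   return [f"Where was {photo} taken?", f"Who appears in {photo}?"]
def generate_feedback(ans): return "That sounds like a nice memory!"
def say(text):              print("Elisabot:", text)
def get_user_answer():      return input("> ")

def run_session(photos, questions_per_photo=5):
    """Schematic session flow: pick a photo, ask a few questions, comment on each answer."""
    remaining = list(photos)
    while remaining:
        photo = random.choice(remaining)
        remaining.remove(photo)
        say(f"Let's talk about this picture: {photo}")
        for question in ask_questions(photo)[:questions_per_photo]:
            say(question)
            say(generate_feedback(get_user_answer()))

if __name__ == "__main__":
    run_session(["photo_wedding.jpg", "photo_dog.jpg"])
```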
Elisabot is composed of two models: the model in charge of asking questions about the image, which we will refer to as the VQG model, and the Chatbot model, which tries to make the dialogue more engaging by giving feedback on the user's answers.
Methodology ::: VQG model
The algorithm behind VQG consists of an Encoder-Decoder architecture with attention. The Encoder takes as input one of the given photos $I$ from the user and learns its information using a CNN. CNNs have been widely studied for computer vision tasks. The CNN provides the image's learned features to the Decoder, which generates the question $y$ word by word by using an attention mechanism with a Long Short-Term Memory (LSTM). The model is trained to maximize the likelihood $p(y|I)$ of producing a target sequence of words:

$$y = \{y_1, y_2, ..., y_C\}, \; y_i \in \mathbb{R}^{K}$$

where $K$ is the size of the vocabulary and $C$ is the length of the caption.
Since there are already Convolutional Neural Networks (CNNs) trained on large datasets to represent images with an outstanding performance, we make use of transfer learning to integrate a pre-trained model into our algorithm. In particular, we use a ResNet-101 BIBREF23 model trained on ImageNet. We discard the last 2 layers, since these layers classify the image into categories and we only need to extract its features.
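The paper does not specify the deep learning framework; with PyTorch and torchvision, for example, discarding the last two layers of an ImageNet-pretrained ResNet-101 to keep only the convolutional feature extractor could look like this:

```python
import torch
import torch.nn as nn
import torchvision.models as models

resnet = models.resnet101(pretrained=True)        # ImageNet weights
# drop the average-pooling and classification layers, keep the convolutional trunk
encoder = nn.Sequential(*list(resnet.children())[:-2])
encoder.eval()

with torch.no_grad():
    image = torch.randn(1, 3, 224, 224)           # dummy pre-processed image
    features = encoder(image)                     # shape: (1, 2048, 7, 7)
print(features.shape)
```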
Methodology ::: Chatbot network
The core of our chatbot model is a sequence-to-sequence model BIBREF24. This architecture uses a Recurrent Neural Network (RNN) to encode a variable-length sequence to obtain a large fixed dimensional vector representation and another RNN to decode the vector into a variable-length sequence.
The encoder iterates through the input sentence one word at each time step producing an output vector and a hidden state vector. The hidden state vector is passed to the next time step, while the output vector is stored. We use a bidirectional Gated Recurrent Unit (GRU), meaning we use two GRUs one fed in sequential order and another one fed in reverse order. The outputs of both networks are summed at each time step, so we encode past and future context.
The final hidden state $h_t^{enc}$ is fed into the decoder as the initial state $h_0^{dec}$. By using an attention mechanism, the decoder uses the encoder's context vectors and internal hidden states to generate the next word in the sequence. It continues generating words until it outputs an $<$end$>$ token, representing the end of the sentence. We use an attention layer to multiply attention weights with the encoder's outputs to focus on the relevant information when decoding the sequence. This approach has shown better performance on sequence-to-sequence models BIBREF25.
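As a hedged sketch of the encoder described above (the authors do not release code, and the vocabulary size here is made up), a bidirectional GRU whose forward and backward outputs are summed at each time step can be written as:

```python
import torch
import torch.nn as nn

class EncoderRNN(nn.Module):
    def __init__(self, vocab_size, hidden_size=500, n_layers=1):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, n_layers, bidirectional=True)

    def forward(self, input_seq, hidden=None):
        embedded = self.embedding(input_seq)           # (seq_len, batch, hidden)
        outputs, hidden = self.gru(embedded, hidden)   # (seq_len, batch, 2 * hidden)
        # sum forward and backward outputs so past and future context are both encoded
        h = self.gru.hidden_size
        outputs = outputs[:, :, :h] + outputs[:, :, h:]
        return outputs, hidden

encoder = EncoderRNN(vocab_size=7000)                  # hypothetical vocabulary size
tokens = torch.randint(0, 7000, (12, 1))               # 12-token sentence, batch of 1
outputs, hidden = encoder(tokens)
```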
Datasets
One of the first requirements to develop an architecture using a machine learning approach is a training dataset. The lack of open-source datasets containing dialogues from reminiscence therapy led us to use datasets with content similar to the one used in the therapy. In particular, we use two types of datasets to train our models: a dataset that maps pictures with questions, and an open-domain conversation dataset. The details of the two datasets are as follows.
Datasets ::: MS-COCO, Bing and Flickr datasets
We use the MS COCO, Bing and Flickr datasets from BIBREF26 to train the model that generates questions. These datasets contain natural questions about images with the purpose of knowing more about the picture. As can be seen in Figure FIGREF8, the questions cannot be answered by only looking at the image. Each source contains 5,000 images with 5 questions per image, for a total of 15,000 images with 75,000 questions. The COCO dataset includes images of complex everyday scenes containing common objects in their natural context, but it is limited in terms of the concepts it covers. The Bing dataset contains more event related questions and has a wider range of question lengths (between 3 and 20 words), while Flickr questions are shorter (less than 6 words) and the images appear to be more casual.
Datasets ::: Persona-chat and Cornell-movie corpus
We use two datasets to train our chatbot model. The first one is Persona-chat BIBREF15, which contains dialogues between two people with different profiles who are trying to get to know each other. It is complemented by the Cornell-movie dialogues dataset BIBREF27, which contains a collection of fictional conversations extracted from raw movie scripts. Persona-chat sentences have a maximum of 15 words, making them easier for machines to learn, and the dataset has a total of 162,064 utterances over 10,907 dialogues, while the Cornell-movie dataset contains 304,713 utterances over 220,579 conversational exchanges between 10,292 pairs of movie characters.
Validation
An important aspect of dialogue response generation systems is how to evaluate the quality of the generated response. This section presents the training procedure and the quantitative evaluation of the model, together with some qualitative results.
Validation ::: Implementation
Both models are trained using Stochastic Gradient Descent with ADAM optimization BIBREF28 and a learning rate of 1e-4. Besides, we use dropout regularization BIBREF29, which prevents over-fitting by randomly dropping some units of the network.
The VQG encoder is composed of 2048 neuron cells, while the VQG decoder has an attention layer of 512 followed by an embedding layer of 512 and an LSTM of the same size. We use a dropout of 50% and a beam search of width 7 for decoding, which lets us obtain up to 5 output questions. The vocabulary we use consists of all words seen 3 or more times in the training set, which amounts to 11,214 unique tokens. Unknown words are mapped to an $<$unk$>$ token during training, but we do not allow the decoder to produce this token at test time. We also set a maximum sequence length of 6 words, as we want simple questions that are easy to understand and easy to learn by the model.
In the Chatbot model we use a hidden size of 500 and dropout regularization of 25%. For decoding we use greedy search, which consists of making the optimal token choice at each step. We first train it with Persona-chat and then fine-tune it with the Cornell-movie dataset. The vocabulary we use consists of all words seen 3 or more times in the Persona-chat dataset, and we set a maximum sequence length of 12 words. For the hyperparameter setting, we use a batch size of 64.
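Greedy decoding itself is framework-agnostic; a minimal sketch, with a hypothetical step function that returns the next-token distribution given the current prefix, is:

```python
import numpy as np

SOS, EOS, MAX_LEN = 1, 2, 12   # hypothetical special-token ids and the 12-word limit

def greedy_decode(step, max_len=MAX_LEN):
    """step(prefix) -> probability distribution over the vocabulary for the next token."""
    tokens = [SOS]
    for _ in range(max_len):
        next_token = int(np.argmax(step(tokens)))   # optimal token choice at each step
        if next_token == EOS:
            break
        tokens.append(next_token)
    return tokens[1:]

# toy model: always prefers token 5, then emits the end-of-sentence token
toy_step = lambda prefix: np.eye(10)[5 if len(prefix) < 3 else EOS]
print(greedy_decode(toy_step))    # [5, 5]
```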
Validation ::: Quantitative evaluation
We use the BLEU BIBREF30 metric on the validation set for the VQG model training. BLEU is a measure of similarity between generated and target sequences of words, widely used in natural language processing. It assumes that valid generated responses have significant word overlap with the ground truth responses. We use it because in this case we have five different references for each of the generated questions. We obtain a BLEU score of 2.07.
Our chatbot model, instead, only has one reference ground truth in training when generating a sequence of words. We considered that BLEU was not a good metric to apply, as on some occasions responses have the same meaning but do not share any words in common. Thus, we save several models with different hyperparameters and at different numbers of training iterations and compare them using human evaluation, to choose the model that performs best in a conversation.
Validation ::: Qualitative results
Our first goal was to generate meaningful questions from the provided pictures. Table TABREF16 contains some questions generated by our VQG model for each of the proposed images. These images were not in the training set, as they were taken from photo albums, so it was the first time the model had these images as input. By looking at the questions, we can easily realise that the model is not only able to generate grammatically correct questions, but also to understand the content of the image and ask about it. Furthermore, we wanted to generate not just one question per image, but to bring out up to five questions for a deeper conversation.
We used different kinds of pictures, as we wanted a robust model able to work with all kinds of images. In Table TABREF16 we can see both old grayscale and colorful photos, in both vertical and horizontal formats. Most of the pictures we selected for testing show people, because these are the type of pictures used in reminiscence therapy, as they evoke memories from the relationships with those persons. We also tested pictures showing pets, cars and motorbikes, as some people are very attached to them, triggering strong emotions.
Most of the generated questions could have been asked by a person. Nonetheless, we found that some of the questions, such as How did the car get damaged?, do not correspond with the image's content, or are not well formulated, like What body of dog is that?, but they can still be easily understood.
As for the chatbot module, we test it manually by having conversations with it. In the following lines, we can see two samples of a conversation, where each sentence starting with "$>$" is an input to our seq2seq model. In the first sample, the chatbot model is clearly overfitted, as it repeats the same sentence twice. In the second sample, we can see that the model is able to follow the conversation.
Overfitted chatbot model:
$>$ hi
Elisabot: hello ! how are you ?
$>$ fine and you?
Elisabot : i am good do you have any hobbies ?
$>$ yes i like to ski
Elisabot: i love to play sports . it is my favorite
$>$ I have a sister
Elisabot : i have a dog . i love them .
$>$ I have a dog too
Elisabot: i have a dog . i love them
$>$ ok
Elisabot : i am not really a good person
$>$ why?
Elisabot : i love to play sports . it is my favorite
Final chatbot model:
$>$ hello
Elisabot: hello how are you ?
$>$ fine and you?
Elisabot : i m great !
$>$ I have a sister, do you have any brother or sister?
Elisabot : i have a iphone but i have no idea .
$>$ ok
Elisabot : i am a pro fan of thrones
$>$ me too
Elisabot : i like to eat healthy
$>$ have you seen the last chapter?
Elisabot : no i have not .
$>$ where do you live?
Elisabot : i live in the mountains
Usability study
As most of the metrics correlate very weakly with human judgements in the non-technical domain BIBREF31 we decide to evaluate our system with a simple user study with two patients. We present the user interface built and the feedback obtained from the patients.
Usability study ::: User interface
We developed a user interface for Elisabot with Telegram, an instant messaging application available for smartphones or computers. We selected it because it is easy to use and it offers an API for developers to connect bots to the Telegram system. It enables the creation of special accounts for bots, which do not require a phone number to set up.
Telegram is only the interface for the code running on the server. The bot is executed via an HTTP request to the API. Users can start a conversation with Elisabot by typing @TherapistElisabot in the searcher and executing the command /start, as can be seen in Figure FIGREF31. Messages, commands and requests sent by users are passed to the software running on the server. We add the /change, /yes and /exit commands to enable more functionalities. /change gives the user the opportunity to change the image in case they do not want to talk about it, /yes accepts the image to talk about, and /exit finishes the dialogue with Elisabot. The commands can be executed either by tapping on the linked text or typing them.
Feedback from patients
We designed a usability study where users with and without mild cognitive impairment interacted with the system with the help of a doctor and one of the authors. The purpose was to study the acceptability and feasibility of the system with patients with mild cognitive impairment. The users were all older than 60 years. The sessions lasted 30 minutes and were carried out using a laptop computer connected to Telegram. As Elisabot's language is English, we translated the questions for the users and their answers back for Elisabot.
Figure FIGREF38 is a sample of the session we did with mild cognitive impairment patients from an anonymized institution and location. The picture provided by the patient (Figure FIGREF37) is blurred for the user's privacy rights. In this experiment all the generated questions were right according to the image content, but the feedback was wrong for some of the answers. We can see that it was the last picture of the session: when Elisabot asks if the user wants to continue or leave and he decides to continue, Elisabot finishes the session, as there are no more pictures remaining to talk about.
At the end of the session, we administered a survey asking participants the following questions about their assessment of Elisabot:
Did you like it?
Did you find it engaging?
How difficult have you found it?
Responses were given on a five-point scale ranging from strongly disagree (1) to strongly agree (5) and very easy (1) to very difficult (5). The results were 4.6 for amusing and engaging and 2.6 for difficulty. Healthy users found it very easy to use (1/5) and even a bit silly, because of some of the generated questions and comments. Nevertheless, users with mild cognitive impairment found it engaging (5/5) and challenging (4/5), because of the effort they had to make to remember the answers for some of the generated questions. All the users had in common that they enjoyed doing the therapy with Elisabot.
Conclusions
We presented a dialogue system for handling sessions of 30 minutes of reminiscence therapy. Elisabot, our conversational agent, leads the therapy by showing a picture and generating some questions. The goal of the system is to improve users' mood and stimulate their memory and communication skills. Two models were proposed to generate the dialogue system for the reminiscence therapy: a visual question generator composed of a CNN and an LSTM with attention, and a sequence-to-sequence model to generate feedback on the user's answers. We found that fine-tuning our chatbot model with another dataset improved the generated dialogue.
The manual evaluation shows that our model can generate grammatically well-formulated questions and feedback, but on some occasions they are not appropriate in content. As expected, it has a tendency to produce non-specific answers and to lose consistency in its comments with respect to what it has said before. However, the overall usability evaluation of the system by users with mild cognitive impairment shows that they found the session very entertaining and challenging. They had to make an effort to remember the answers to some of the questions, but they were very satisfied when they achieved it. Still, we see that for the proper performance of the therapy, a person is essential to support the user and help them remember the experiences that are being asked about.
This project has many possible future lines. In our future work, we suggest training the model with the Reddit dataset, which could improve the chatbot model, as it has many open-domain conversations. Moreover, we would like to include speech recognition and generation, as well as real-time text translation, to make Elisabot more autonomous and open to older adults with reading and writing difficulties. Furthermore, the lack of consistency in the dialogue might be avoided by improving the architecture to include information about the past conversation in the model. We also think it would be a good idea to recognize feelings from the user's answers and give feedback according to them.
Acknowledgements
Mariona Caros was funded with a scholarship from the Fundacion Vodafone Spain. Petia Radeva was partially funded by TIN2018-095232-B-C21, 2017 SGR 1742, Nestore, Validithi, and CERCA Programme/Generalitat de Catalunya. We acknowledge the support of NVIDIA Corporation with the donation of Titan Xp GPUs.
76377e5bb7d0a374b0aefc54697ac9cd89d2eba8 | 76377e5bb7d0a374b0aefc54697ac9cd89d2eba8_0 | Q: How do they obtain word lattices from words?
Text: Introduction
Short text matching plays a critical role in many natural language processing tasks, such as question answering, information retrieval, and so on. However, matching text sequences for Chinese or similar languages often suffers from word segmentation, where there are often no perfect Chinese word segmentation tools that suit every scenario. Text matching usually requires capturing the relatedness between two sequences in multiple granularities. For example, in Figure FIGREF4, the example phrase is generally tokenized as “China – citizen – life – quality – high”, but when we plan to match it with “Chinese – live – well”, it would be more helpful to have the example segmented into “Chinese – livelihood – live” than its common segmentation.
Existing efforts use neural network models to improve the matching based on the fact that distributed representations can generalize discrete word features in traditional bag-of-words methods. There are also works fusing word level and character level information, which, to some extent, could relieve the mismatch between different segmentations, but these solutions still suffer from the original word sequential structures. They usually depend on an existing word tokenization, which has to make segmentation choices at one time, e.g., “ZhongGuo”(China) and “ZhongGuoRen”(Chinese) when processing “ZhongGuoRenMin”(Chinese people). And the blending is only conducted at one position in their frameworks.
Specific tasks such as question answering (QA) could pose further challenges for short text matching. In document based question answering (DBQA), the matching degree is expected to reflect how likely a sentence can answer a given question, where questions and candidate answer sentences usually come from different sources, and may exhibit significantly different styles or syntactic structures, e.g. queries in web search and sentences in web pages. This could further aggravate the mismatch problems. In knowledge based question answering (KBQA), one of the key tasks is to match relational expressions in questions with knowledge base (KB) predicate phrases, such as “ZhuCeDi”(place of incorporation). Here the diversity between the two kinds of expressions is even more significant, where there may be dozens of different verbal expressions in natural language questions corresponding to only one KB predicate phrase. Those expression problems make KBQA a further tough task. Previous works BIBREF0 , BIBREF1 adopt letter-trigrams for the diverse expressions, which is similar to character level of Chinese. And the lattices are combinations of words and characters, so with lattices, we can utilize words information at the same time.
Recent advances have put efforts in modeling multi-granularity information for matching. BIBREF2, BIBREF3 blend words and characters into a simple sequence (at the word level), and BIBREF4 utilize multiple convolutional kernel sizes to capture different n-grams. But most characters in Chinese can be seen as words on their own, so combining characters with corresponding words directly may lose the meanings that those characters can express alone. Because of the sequential inputs, they will either lose word level information when operating on character sequences or have to make segmentation choices.
In this paper, we propose a multi-granularity method for short text matching in Chinese question answering which utilizes lattice based CNNs to extract sentence level features over word lattice. Specifically, instead of relying on character or word level sequences, LCNs take word lattices as input, where every possible word and character will be treated equally and have their own context so that they can interact at every layer. For each word in each layer, LCNs can capture different context words in different granularity via pooling methods. To the best of our knowledge, we are the first to introduce word lattice into the text matching tasks. Because of the similar IO structures to original CNNs and the high efficiency, LCNs can be easily adapted to more scenarios where flexible sentence representation modeling is required.
We evaluate our LCNs models on two question answering tasks, document based question answering and knowledge based question answering, both in Chinese. Experimental results show that LCNs significantly outperform the state-of-the-art matching methods and other competitive CNNs baselines in both scenarios. We also find that LCNs can better capture the multi-granularity information from plain sentences and, meanwhile, maintain better de-noising capability than vanilla graph convolutional neural networks thanks to their dynamic convolutional kernels and gated pooling mechanism.
Lattice CNNs
Our Lattice CNNs framework is built upon the siamese architecture BIBREF5 , one of the most successful frameworks in text matching, which takes the word lattice format of a pair of sentences as input, and outputs the matching score.
Siamese Architecture
The siamese architecture and its variants have been widely adopted in sentence matching BIBREF6, BIBREF3 and matching based question answering BIBREF7, BIBREF0, BIBREF8. It has a symmetrical component to extract high level features from different input channels, which shares parameters and maps inputs to the same vector space. Then, the sentence representations are merged and compared to output the similarities.
For our models, we use multi-layer CNNs for sentence representation. Residual connections BIBREF9 are used between convolutional layers to enrich features and make it easier to train. Then, max-pooling summarizes the global features to get the sentence level representations, which are merged via element-wise multiplication. The matching score is produced by a multi-layer perceptron (MLP) with one hidden layer based on the merged vector. The fusing and matching procedure is formulated as follows:

$$s = \sigma \big ( \mathbf {W}_2 \, \mathrm {ReLU} \big ( \mathbf {W}_1 ( \mathbf {f}_q \odot \mathbf {f}_c ) + \mathbf {b}_1 \big ) + b_2 \big )$$

where $\mathbf {f}_q$ and $\mathbf {f}_c$ are feature vectors of question and candidate (sentence or predicate) separately encoded by CNNs, $\sigma $ is the sigmoid function, $\mathbf {W}_1, \mathbf {W}_2, \mathbf {b}_1, b_2$ are parameters, and $\odot $ is element-wise multiplication. The training objective is to minimize the binary cross-entropy loss, defined as:

$$\mathcal {L} = - \sum _i \big [ y_i \log s_i + (1 - y_i) \log (1 - s_i) \big ]$$

where $y_i$ is the {0,1} label for the $i$-th training pair.
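To make the merge-and-score step concrete, the following numpy sketch mirrors the formulation above; the layer sizes are toy values except for the 1024-dim MLP hidden layer reported in the implementation details, and the CNN encoders themselves are omitted:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def match_score(f_q, f_c, W1, b1, W2, b2):
    """Merge the two sentence vectors element-wise, then score with a one-hidden-layer MLP."""
    merged = f_q * f_c                             # element-wise multiplication
    hidden = np.maximum(0.0, merged @ W1 + b1)     # ReLU hidden layer
    return sigmoid(hidden @ W2 + b2)               # matching score in (0, 1)

def bce_loss(scores, labels, eps=1e-9):
    scores = np.clip(scores, eps, 1 - eps)
    return -np.mean(labels * np.log(scores) + (1 - labels) * np.log(1 - scores))

rng = np.random.default_rng(0)
f_q, f_c = rng.normal(size=256), rng.normal(size=256)    # toy 256-dim sentence vectors
W1, b1 = rng.normal(size=(256, 1024)) * 0.01, np.zeros(1024)
W2, b2 = rng.normal(size=(1024, 1)) * 0.01, np.zeros(1)
print(match_score(f_q, f_c, W1, b1, W2, b2), bce_loss(np.array([0.8, 0.2]), np.array([1, 0])))
```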
Note that the CNNs in the sentence representation component can be either original CNNs with sequence input or lattice based CNNs with lattice input. Intuitively, in an original CNN layer, several kernels scan every n-gram in a sequence and result in one feature vector, which can be seen as the representation for the center word and will be fed into the following layers. However, each word may have different context words in different granularities in a lattice and may be treated as the center in various kernel spans with same length. Therefore, different from the original CNNs, there could be several feature vectors produced for a given word, which is the key challenge to apply the standard CNNs directly to a lattice input.
For the example shown in Figure FIGREF6 , the word “citizen” is the center word of four text spans with length 3: “China - citizen - life”, “China - citizen - alive”, “country - citizen - life”, “country - citizen - alive”, so four feature vectors will be produced for width-3 convolutional kernels for “citizen”.
Word Lattice
As shown in Figure FIGREF4, a word lattice is a directed graph $G = \langle V, E \rangle $, where $V$ represents a node set and $E$ represents an edge set. For a sentence in Chinese, which is a sequence of Chinese characters $c_1 c_2 \ldots c_n$, all of its possible substrings that can be considered as words are treated as vertexes, i.e. $V = \{ c_{i:j} \mid c_{i:j} \text{ is considered as a word} \}$. Then, all neighbor words are connected by directed edges according to their positions in the original sentence, i.e. $E = \{ \langle c_{i:j}, c_{j+1:k} \rangle \mid c_{i:j}, c_{j+1:k} \in V \}$.
Here, one of the key issues is how we decide a sequence of characters can be considered as a word. We approach this through an existing lookup vocabulary, which contains frequent words in BaiduBaike. Note that most Chinese characters can be considered as words on their own, thus are included in this vocabulary when they have been used as words on their own in this corpus.
However, doing so will inevitably introduce noisy words (e.g., “middle” in Figure FIGREF4 ) into word lattices, which will be smoothed by pooling procedures in our model. And the constructed graphs could be disconnected because of a few out-of-vocabulary characters. Thus, we append INLINEFORM0 labels to replace those characters to connect the graph.
Obviously, word lattices are collections of characters and all possible words. Therefore, it is not necessary to make explicit decisions regarding specific word segmentations; we just embed all possible information into the lattice and take it to the next CNN layers. The inherent graph structure of a word lattice allows all possible words to be represented explicitly, no matter the overlapping and nesting cases, and all of them can contribute directly to the sentence representations.
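A minimal Python sketch of this construction is shown below, with a toy alphabet standing in for Chinese characters and a hypothetical lookup vocabulary; the real system uses a 156k-word vocabulary and additionally relabels out-of-vocabulary single characters:

```python
def build_word_lattice(sentence, vocab):
    """Build a word lattice from a character sequence: every substring found in the
    vocabulary (single characters included) becomes a node; two nodes are connected
    by a directed edge when one ends right before the other starts."""
    n = len(sentence)
    nodes = [(i, j) for i in range(n) for j in range(i + 1, n + 1)
             if sentence[i:j] in vocab or j - i == 1]   # keep unknown single characters
    edges = [(a, b) for a in nodes for b in nodes if a[1] == b[0]]
    return nodes, edges

# toy example
vocab = {"a", "b", "c", "ab", "bc", "abc"}
nodes, edges = build_word_lattice("abc", vocab)
print(nodes)   # [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(edges)   # ((0,1),(1,2)), ((0,1),(1,3)), ((0,2),(2,3)), ((1,2),(2,3))
```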
Lattice based CNN Layer
As we mentioned in the previous section, we cannot directly apply standard CNNs to take a word lattice as input, since there could be multiple feature vectors produced for a given word. Inspired by previous lattice LSTM models BIBREF10, BIBREF11, here we propose lattice based CNN layers to allow standard CNNs to work over word lattice input. Specifically, we utilize pooling mechanisms to merge the feature vectors produced by multiple CNN kernels over different context compositions.
Formally, the output feature vector of a lattice CNN layer with kernel size $n$ at word $w$ in a word lattice $G$ can be formulated as Eq EQREF12:

$$\mathbf {h}_w = g \Big ( \Big \{ f \big ( \mathbf {W} [ \mathbf {x}_{w_1}; \ldots ; \mathbf {x}_{w_n} ] + \mathbf {b} \big ) \;\Big |\; (w_1, \ldots , w_n) \text{ is an } n\text{-gram in } G \text{ with } w \text{ as its center} \Big \} \Big )$$

where $f$ is the activation function, $\mathbf {x}_w$ is the input vector corresponding to word $w$ in this layer, $[ \mathbf {x}_{w_1}; \ldots ; \mathbf {x}_{w_n} ]$ means the concatenation of these vectors, and $\mathbf {W}$, $\mathbf {b}$ are parameters with size $n d_{in} \times d_{out}$ and $d_{out}$, respectively. $d_{in}$ is the input dim and $d_{out}$ is the output dim. $g$ is one of the following pooling functions: max-pooling, ave-pooling, or gated-pooling, which execute the element-wise maximum, element-wise average, and the gated operation, respectively. The gated operation can be formulated as:

$$\alpha _i = \mathrm {softmax}_i \big ( \mathbf {w}_g^{\top } \mathbf {v}_i + b_g \big ), \qquad g \big ( \{ \mathbf {v}_i \} \big ) = \sum _i \alpha _i \mathbf {v}_i$$

where $\mathbf {w}_g, b_g$ are parameters, and $\alpha _i$ are gated weights normalized by a softmax function. Intuitively, the gates represent the importance of the n-gram contexts, and the weighted sum can control the transmission of noisy context words. We perform padding when necessary.
For example, in Figure FIGREF6 , when we consider “citizen” as the center word, and the kernel size is 3, there will be five words and four context compositions involved, as mentioned in the previous section, each marked in different colors. Then, 3 kernels scan on all compositions and produce four 3-dim feature vectors. The gated weights are computed based on those vectors via a dense layer, which can reflect the importance of each context compositions. The output vector of the center word is their weighted sum, where noisy contexts are expected to have lower weights to be smoothed. This pooling over different contexts allows LCNs to work over word lattice input.
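As an illustrative numpy sketch of the gated pooling over context compositions (the exact parameterization of the dense gating layer in the paper may differ), the weighted sum can be computed as:

```python
import numpy as np

def gated_pooling(context_vectors, w_gate, b_gate):
    """Weight each n-gram context composition by a softmax gate and return their sum,
    so noisy compositions can be down-weighted instead of dominating the output."""
    v = np.stack(context_vectors)                 # (num_compositions, dim)
    scores = v @ w_gate + b_gate                  # one scalar score per composition
    gates = np.exp(scores - scores.max())
    gates = gates / gates.sum()                   # softmax-normalized gate weights
    return gates @ v                              # weighted sum, shape (dim,)

# toy example: four 3-dim feature vectors from the four width-3 compositions of "citizen"
rng = np.random.default_rng(0)
compositions = [rng.normal(size=3) for _ in range(4)]
w, b = rng.normal(size=3), 0.0
print(gated_pooling(compositions, w, b))
```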
Word lattices can be seen as directed graphs and modeled by Directed Graph Convolutional networks (DGCs) BIBREF12, which use poolings on neighboring vertexes that ignore the semantic structure of n-grams. But in some situations, their formulations can be very similar to ours (see Appendix for derivation). For example, if we set the kernel size in LCNs to 3, use linear activations and suppose the pooling mode is average in both LCNs and DGCs, at each word in each layer, the DGCs compute the average of the first order neighbors together with the center word, while the LCNs compute the average of the pre and post words separately and add them to the center word. Empirical results are exhibited in the Experiments section.
Finally, given a sentence that has been constructed into a word-lattice form, for each node in the lattice, an LCN layer will produce one feature vector similar to original CNNs, which makes it easier to stack multiple LCN layers to obtain more abstract feature representations.
Experiments
Our experiments are designed to answer: (1) whether multi-granularity information in word lattice helps in matching based QA tasks, (2) whether LCNs capture the multi-granularity information through lattice well, and (3) how to balance the noisy and informative words introduced by word lattice.
Datasets
We conduct experiments on two Chinese question answering datasets from NLPCC-2016 evaluation task BIBREF13 .
DBQA is a document based question answering dataset. There are 8.8k questions with 182k question-sentence pairs for training and 6k questions with 123k question-sentence pairs in the test set. On average, each question has 20.6 candidate sentences and 1.04 golden answers. The average length of questions is 15.9 characters, and each candidate sentence has 38.4 characters on average. Both questions and sentences are natural language sentences, possibly sharing more similar word choices and expressions compared to the KBQA case. But the candidate sentences are extracted from web pages, and are often much longer than the questions, with many irrelevant clauses.
KBRE is a knowledge based relation extraction dataset. We follow the same preprocessing as BIBREF14 to clean the dataset and replace entity mentions in questions with a special token. There are 14.3k questions with 273k question-predicate pairs in the training set and 9.4k questions with 156k question-predicate pairs for testing. Each question contains only one golden predicate. On average, each question has 18.1 candidate predicates and is 8.1 characters in length, while a KB predicate is only 3.4 characters long on average. Note that a KB predicate is usually a concise phrase, with quite different word choices compared to the natural language questions, which poses different challenges to solve.
The vocabulary we use to construct word lattices contains 156k words, including 9.1k single character words. On average, each DBQA question contains 22.3 tokens (words or characters) in its lattice, each DBQA candidate sentence has 55.8 tokens, each KBQA question has 10.7 tokens and each KBQA predicate contains 5.1 tokens.
Evaluation Metrics
For both datasets, we follow the evaluation metrics used in the original evaluation tasks BIBREF13 . For DBQA, P@1 (Precision@1), MAP (Mean Average Precision) and MRR (Mean Reciprocal Rank) are adopted. For KBRE, since only one golden candidate is labeled for each question, only P@1 and MRR are used.
Implementation Details
The word embeddings are trained on the Baidu Baike webpages with Google's word2vec, are 300-dimensional, and are fine-tuned during training. In DBQA, we also follow previous works BIBREF15, BIBREF16 to concatenate additional 1d-indicators with word vectors which denote whether the words are concurrent in both questions and candidate sentences. In each CNN layer, there are 256, 512, and 256 kernels with width 1, 2, and 3, respectively. The size of the hidden layer for the MLP is 1024. All activations are ReLU, the dropout rate is 0.5, and the batch size is 64. We optimize with adadelta BIBREF17 with learning rate INLINEFORM0 and decay factor INLINEFORM1. We only tune the number of convolutional layers from [1, 2, 3] and fix other hyper-parameters. We sample at most 10 negative sentences per question in DBQA and 5 in KBRE. We implement our models in Keras with a Tensorflow backend.
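The authors implement the models in Keras but do not release code; a simplified sketch of one multi-width convolution block and a max-pooled sentence vector (residual connections and the lattice-specific pooling are omitted here) might look as follows:

```python
from tensorflow.keras import layers, Input, Model

def conv_block(x):
    """One multi-width convolution block: 256/512/256 kernels of width 1/2/3, concatenated."""
    branches = [layers.Conv1D(filters=f, kernel_size=k, padding="same", activation="relu")(x)
                for f, k in [(256, 1), (512, 2), (256, 3)]]
    return layers.Concatenate()(branches)

tokens = Input(shape=(None, 300))        # sequence of 300-dim word embeddings
features = conv_block(tokens)            # (batch, seq_len, 1024)
sentence_vec = layers.GlobalMaxPooling1D()(features)
model = Model(tokens, sentence_vec)
model.summary()
```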
Baselines
Our first set of baselines uses original CNNs with character (CNN-char) or word inputs. For each sentence, two Chinese word segmenters are used to obtain three different word sequences: jieba (CNN-jieba), and Stanford Chinese word segmenter in CTB (CNN-CTB) and PKU (CNN-PKU) mode.
Our second set of baselines combines different word segmentations. Specifically, we concatenate the sentence embeddings from different segment results, which gives four different word+word models: jieba+PKU, PKU+CTB, CTB+jieba, and PKU+CTB+jieba.
Inspired by previous works BIBREF2 , BIBREF3 , we also concatenate word and character embeddings at the input level. Specially, when the basic sequence is in word level, each word may be constructed by multiple characters through a pooling operation (Word+Char). Our pilot experiments show that average-pooling is the best for DBQA while max-pooling after a dense layer is the best for KBQA. When the basic sequence is in character level, we simply concatenate the character embedding with its corresponding word embedding (Char+Word), since each character belongs to one word only. Again, when the basic sequence is in character level, we can also concatenate the character embedding with a pooled representation of all words that contain this character in the word lattice (Char+Lattice), where we use max pooling as suggested by our pilot experiments.
DGCs BIBREF12 , BIBREF18 are strong baselines that perform CNNs over directed graphs to produce high level representation for each vertex in the graph, which can be used to build a sentence representation via certain pooling operation. We therefore choose to compare with DGC-max (with maximum pooling), DGC-ave (with average pooling), and DGC-gated (with gated pooling), where the gate value is computed using the concatenation of the vertex vector and the center vertex vector through a dense layer. We also implement several state-of-the-art matching models using the open-source project MatchZoo BIBREF19 , where we tune hyper-parameters using grid search, e.g., whether using word or character inputs. Arc1, Arc2, CDSSM are traditional CNNs based matching models proposed by BIBREF20 , BIBREF21 . Arc1 and CDSSM compute the similarity via sentence representations and Arc2 uses the word pair similarities. MV-LSTM BIBREF22 computes the matching score by examining the interaction between the representations from two sentences obtained by a shared BiLSTM encoder. MatchPyramid(MP) BIBREF23 utilizes 2D convolutions and pooling strategies over word pair similarity matrices to compute the matching scores.
We also compare with the state-of-the-art models in DBQA BIBREF15 , BIBREF16 .
Results
Here, we mainly describe the main results on the DBQA dataset, while we find very similar trends on the KBRE dataset. Table TABREF26 summarizes the main results on the two datasets. We can see that the simple MatchZoo models perform the worst. Although Arc1 and CDSSM are also constructed in the siamese architecture with CNN layers, they do not employ multiple kernel sizes and residual connections, and fail to capture the relatedness in a multi-granularity fashion.
BIBREF15 is similar to our word level models (CNN-jieba/PKU/CTB), but outperforms our models by around 3%, since it benefits from an extra interaction layer with fine tuned hyper-parameters. BIBREF16 further incorporates human designed features including POS-tag interaction and TF-IDF scores, achieving state-of-the-art performance in the literature of this DBQA dataset. However, both of them perform worse than our simple CNN-char model, which is a strong baseline because characters, that describe the text in a fine granularity, can relieve word mismatch problem to some extent. And our best LCNs model further outperforms BIBREF16 by .0134 in MRR.
For single granularity CNNs, CNN-char performs better than all word level models, because they heavily suffer from word mismatching given one fixed word segmentation result. And the models that utilize different word segmentations can relieve this problem and gain better performance, which can be further improved by the combination of words and characters. The DGCs and LCNs, being able to work on lattice input, outperform all previous models that have sequential inputs, indicating that the word lattice is a more promising form than a single word sequence, and should be better captured by taking the inherent graph structure into account. Although they take the same input, LCNs still perform better than the best DGCs by a margin, showing the advantages of the CNN kernels over multiple n-grams in the lattice structures and the gated pooling strategy.
To fairly compare with previous KBQA works, we combine our LCN-ave settings with the entity linking results of the state-of-the-art KBQA model BIBREF14 . The P@1 for question answering of single LCN-ave is 86.31%, which outperforms both the best single model (84.55%) and the best ensembled model (85.40%) in literature.
Analysis and Discussions
As shown in Table TABREF26 , the combined word level models (e.g. CTB+jieba or PKU+CTB) perform better than any word level CNNs with single word segmentation result (e.g. CNN-CTB or CNN-PKU). The main reason is that there are often no perfect Chinese word segmenters and a single improper segmentation decision may harm the matching performance, since that could further make the word mismatching issue worse, while the combination of different word segmentation results can somehow relieve this situation.
Furthermore, the models combining words and characters all perform better than PKU+CTB+jieba, because they could be complementary in different granularities. Specifically, Word+Char is still worse than CNN-char, because Chinese characters have rich meanings and compressing several characters into a single word vector will inevitably lose information. Furthermore, the combined sequence of Word+Char still works at the word level, which still suffers from the single segmentation decision. On the other hand, the Char+Word model is also slightly worse than CNN-char. We think one reason is that the duplicated word embeddings concatenated with each character vector confuse the CNNs, and perhaps lead to overfitting. But we can still see that Char+Word performs better than Word+Char, because the former works at the character level and the fine-granularity information actually helps to relieve word mismatch. Note that Char+Lattice outperforms Char+Word, and is even slightly better than CNN-char. This illustrates that multiple word segmentations are still helpful to further improve the strong character level baseline CNN-char, which may still benefit from word level information in a multi-granularity fashion.
In conclusion, the combination of different sequences and information of different granularities can help improve text matching, showing that it is necessary to consider a fashion that includes both characters and more possible words, which the word lattice can provide.
For DGCs with different kinds of pooling operations, average pooling (DGC-ave) performs the best, and delivers similar performance to LCN-ave. DGC-max performs a little worse, because it ignores the importance of different edges and the maximum operation is more sensitive to noise than the average operation. DGC-gated performs the worst. Compared with LCN-gated, which learns the gate value adaptively from multiple n-gram contexts, it is harder for a DGC to learn the importance of each edge via the node and the center node in the word lattice. It is not surprising that LCN-gated performs much better than DGC-gated, indicating again that n-grams in the word lattice play an important role in context modeling, while DGCs are designed for general directed graphs, which may not be perfect for working with word lattices.
For LCNs with different pooling operations, LCN-max and LCN-ave lead to similar performances, and perform better on KBRE, while LCN-gated is better on DBQA. This may be due to the fact that sentences in DBQA are relatively longer with more irrelevant information, which requires filtering noisy context, while on KBRE, with much shorter predicate phrases, LCN-gated may slightly overfit due to its more complex model structure. Overall, we can see that LCNs perform better than DGCs, thanks to the advantage of better capturing multiple n-gram contexts in the word lattice.
To investigate more intuitively how LCNs utilize multi-granularity, we analyze the MRR score against the granularity of overlaps between questions and answers in the DBQA dataset, which is shown in Figure FIGREF32. It is demonstrated that CNN-char performs impressively better than CNN-CTB in the first few groups, where most of the overlaps are single characters which will cause serious word mismatch. As the length of overlaps grows, CNN-CTB catches up and finally overtakes CNN-char, even though its overall performance is much lower. These results show that word information is complementary to characters to some extent. LCN-gated approaches CNN-char in the first few groups, and outperforms both character and word level models in the following groups, where word level information becomes more powerful. This demonstrates that LCNs can effectively take advantage of different granularities, and the combination will not be harmful even when the matching clues appear in extreme cases.
How to Create Word Lattice
In previous experiments, we construct word lattices via an existing lookup vocabulary, which inevitably introduces some noisy words. Here we construct lattices from various word segmentations with different strategies to investigate the balance between the noisy words and the additional information introduced by word lattices. We only use the DBQA dataset because word lattices here are more complex, so the construction strategies have more influence. Pilot experiments show that word lattices constructed based on character sequences perform better, so the strategies in Table TABREF33 are based on CNN-char.
From Table TABREF33, it is shown that all kinds of lattices are better than CNN-char, which also evidences the usefulness of word information. Among all LCN models, a more complex lattice produces better performance in principle, which indicates that LCNs can handle the noisy words well and the influence of noisy words cannot cancel the positive information brought by complex lattices. It is also noticeable that LCN-gated is better than LCN-C+20 by a considerable margin, which shows that the words not in general tokenization (e.g. “livelihood” in Fig FIGREF4) are potentially useful.
Besides the enlarged vocabulary, LCNs only introduce a negligible number of parameters for gated pooling, which is not a heavy burden. The training speed is about 2.8 batches per second, 5 times slower than the original CNNs, and the whole training of a 2-layer LCN-gated on the DBQA dataset takes only about 37.5 minutes. The efficiency could be further improved if the network structure were built dynamically with frameworks that support it. The fast speed and small parameter increment give LCNs a promising future in more NLP tasks.
Case Study
Figure FIGREF37 shows a case study comparing models at different input levels. The word level model is relatively coarse in how it utilizes information, and finds a sentence with the longest overlap (5 words, 12 characters). However, it does not realize that the question is about the number of people, and that “DaoHang”(navigate) is a verb in the question but a noun in the sentence. The character level model finds a long sentence which covers most of the characters in the question, which shows the power of fine-grained matching. But without the help of words, it is hard to distinguish the “Ren”(people) in “DuoShaoRen”(how many people) from that in “ChuangShiRen”(founder), so it loses the most important information. In the lattice, although overlaps are limited, “WangZhan”(website, “Wang” web, “Zhan” station) can match “WangZhi”(Internet addresses, “Wang” web, “Zhi” addresses) and also relates to “DaoHang”(navigate), from which the model may infer that “WangZhan”(website) refers to “tao606 seller website navigation”(a website name). Moreover, “YongHu”(user) can match “Ren”(people). With cooperation between characters and words, the model catches the key points of the question and eliminates the other two candidates; as a result, it finds the correct answer.
Related Work
Deep learning models have been widely adopted in natural language sentence matching. Representation based models BIBREF21 , BIBREF7 , BIBREF0 , BIBREF8 encode and compare the matching branches in a hidden space. Interaction based models BIBREF23 , BIBREF22 , BIBREF3 incorporate interaction features between all word pairs and adopt 2D convolutions to extract matching features. Our models are built upon the representation based architecture, which is better suited to short text matching.
In recent years, many researchers have become interested in utilizing all sorts of external or multi-granularity information in matching tasks. BIBREF24 exploit hidden units at different depths to realize interaction between substrings of different lengths. BIBREF3 join multiple pooling methods when merging sentence level features, and BIBREF4 exploit interactions between text spans of different lengths. Among the works most similar to ours, BIBREF3 also incorporate characters, which are fed into LSTMs whose outputs are concatenated with word embeddings, and BIBREF8 utilize words together with predicate level tokens in the KBRE task. However, none of them exploit the multi-granularity information in word lattices for languages like Chinese that do not have spaces to segment words naturally. Furthermore, our model does not conflict with most of them (except BIBREF3 ) and could gain further improvement from their techniques.
GCNs BIBREF25 , BIBREF26 and graph-RNNs BIBREF27 , BIBREF28 have extended CNNs and RNNs to model graph information, and DGCs generalize GCNs on directed graphs in the fields of semantic-role labeling BIBREF12 , document dating BIBREF18 , and SQL query embedding BIBREF29 . However, DGCs control information flowing from neighbor vertexes via edge types, while we focus on capturing different contexts for each word in word lattice via convolutional kernels and poolings.
Previous works have incorporated Chinese word lattices into RNNs for Chinese-English translation BIBREF10 , Chinese named entity recognition BIBREF11 , and Chinese word segmentation BIBREF30 . To the best of our knowledge, we are the first to run CNNs over word lattices, and the first to use word lattices in matching tasks. Moreover, we are motivated to utilize the multi-granularity information in word lattices to relieve word mismatch and diverse expressions in Chinese question answering, while they mainly focus on error propagation from segmenters.
Conclusions
In this paper, we propose a novel neural network matching method (LCNs) for matching based question answering in Chinese. Rather than relying on a single word sequence, our model takes a word lattice as input. By performing CNNs over multiple n-gram contexts to exploit multi-granularity information, LCNs can relieve the word mismatch challenge. Thorough experiments show that our model can better explore the word lattice via convolutional operations and rich context-aware pooling, and thus outperforms the state-of-the-art models and competitive baselines by a large margin. Further analyses show that the lattice input takes advantage of both word and character level information, and that the vocabulary based lattice constructor outperforms strategies that combine characters with different word segmentations.
Acknowledgments
This work is supported by Natural Science Foundation of China (Grant No. 61672057, 61672058, 61872294); the UK Engineering and Physical Sciences Research Council under grants EP/M01567X/1 (SANDeRs) and EP/M015793/1 (DIVIDEND); and the Royal Society International Collaboration Grant (IE161012). For any correspondence, please contact Yansong Feng. | By considering words as vertices and generating directed edges between neighboring words within a sentence |
85aa125b3a15bbb6f99f91656ca2763e8fbdb0ff | 85aa125b3a15bbb6f99f91656ca2763e8fbdb0ff_0 | Q: Which metrics do they use to evaluate matching?
Text: Introduction
Short text matching plays a critical role in many natural language processing tasks, such as question answering and information retrieval. However, matching text sequences for Chinese or similar languages often suffers from word segmentation, as there is often no perfect Chinese word segmentation tool that suits every scenario. Text matching usually requires capturing the relatedness between two sequences at multiple granularities. For example, in Figure FIGREF4 , the example phrase is generally tokenized as “China – citizen – life – quality – high”, but when we plan to match it with “Chinese – live – well”, it would be more helpful to have the example segmented into “Chinese – livelihood – live” than its common segmentation.
Existing efforts use neural network models to improve matching, based on the fact that distributed representations can generalize the discrete word features in traditional bag-of-words methods. There are also works fusing word level and character level information, which, to some extent, can relieve the mismatch between different segmentations, but these solutions still suffer from the original sequential word structures. They usually depend on an existing word tokenization, which has to make segmentation choices once and for all, e.g., “ZhongGuo”(China) versus “ZhongGuoRen”(Chinese) when processing “ZhongGuoRenMin”(Chinese people). And the blending is performed at only one position in their frameworks.
Specific tasks such as question answering (QA) could pose further challenges for short text matching. In document based question answering (DBQA), the matching degree is expected to reflect how likely a sentence can answer a given question, where questions and candidate answer sentences usually come from different sources, and may exhibit significantly different styles or syntactic structures, e.g. queries in web search and sentences in web pages. This could further aggravate the mismatch problems. In knowledge based question answering (KBQA), one of the key tasks is to match relational expressions in questions with knowledge base (KB) predicate phrases, such as “ZhuCeDi”(place of incorporation). Here the diversity between the two kinds of expressions is even more significant, where there may be dozens of different verbal expressions in natural language questions corresponding to only one KB predicate phrase. Those expression problems make KBQA a further tough task. Previous works BIBREF0 , BIBREF1 adopt letter-trigrams for the diverse expressions, which is similar to character level of Chinese. And the lattices are combinations of words and characters, so with lattices, we can utilize words information at the same time.
Recent advances have put effort into modeling multi-granularity information for matching. BIBREF2 , BIBREF3 blend words and characters into a single sequence (at the word level), and BIBREF4 utilize multiple convolutional kernel sizes to capture different n-grams. But most Chinese characters can be seen as words on their own, so directly combining characters with their corresponding words may lose the meanings that those characters can express alone. Because of their sequential inputs, these models either lose word level information when operating on character sequences or have to make segmentation choices.
In this paper, we propose a multi-granularity method for short text matching in Chinese question answering which utilizes lattice based CNNs to extract sentence level features over word lattice. Specifically, instead of relying on character or word level sequences, LCNs take word lattices as input, where every possible word and character will be treated equally and have their own context so that they can interact at every layer. For each word in each layer, LCNs can capture different context words in different granularity via pooling methods. To the best of our knowledge, we are the first to introduce word lattice into the text matching tasks. Because of the similar IO structures to original CNNs and the high efficiency, LCNs can be easily adapted to more scenarios where flexible sentence representation modeling is required.
We evaluate our LCNs models on two question answering tasks, document based question answering and knowledge based question answering, both in Chinese. Experimental results show that LCNs significantly outperform the state-of-the-art matching methods and other competitive CNNs baselines in both scenarios. We also find that LCNs can better capture the multi-granularity information from plain sentences, and, meanwhile, maintain better de-noising capability than vanilla graphic convolutional neural networks thanks to its dynamic convolutional kernels and gated pooling mechanism.
Lattice CNNs
Our Lattice CNNs framework is built upon the siamese architecture BIBREF5 , one of the most successful frameworks in text matching, which takes the word lattice format of a pair of sentences as input, and outputs the matching score.
Siamese Architecture
The siamese architecture and its variants have been widely adopted in sentence matching BIBREF6 , BIBREF3 and matching based question answering BIBREF7 , BIBREF0 , BIBREF8 . It has a symmetrical component that extracts high level features from different input channels, sharing parameters and mapping inputs to the same vector space. Then, the sentence representations are merged and compared to output the similarity.
For our models, we use multi-layer CNNs for sentence representation. Residual connections BIBREF9 are used between convolutional layers to enrich features and make it easier to train. Then, max-pooling summarizes the global features to get the sentence level representations, which are merged via element-wise multiplication. The matching score is produced by a multi-layer perceptron (MLP) with one hidden layer based on the merged vector. The fusing and matching procedure is formulated as follows: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are feature vectors of question and candidate (sentence or predicate) separately encoded by CNNs, INLINEFORM2 is the sigmoid function, INLINEFORM3 are parameters, and INLINEFORM4 is element-wise multiplication. The training objective is to minimize the binary cross-entropy loss, defined as: DISPLAYFORM0
where INLINEFORM0 is the {0,1} label for the INLINEFORM1 training pair.
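To make the fusing and scoring step above concrete, here is a minimal Keras sketch of the element-wise merge, one-hidden-layer MLP, sigmoid score, and binary cross-entropy objective. This is an illustration rather than the released code; the sentence vector dimension and the variable names are assumptions.

```python
# Minimal sketch of the merge-and-score head: element-wise fusion of the two
# sentence vectors, an MLP with one hidden layer, and a sigmoid matching score
# trained with binary cross-entropy. Dimensions are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

sent_dim = 256  # assumed size of the encoded sentence vectors
q_vec = keras.Input(shape=(sent_dim,), name="question_vector")
c_vec = keras.Input(shape=(sent_dim,), name="candidate_vector")

merged = layers.Multiply()([q_vec, c_vec])               # element-wise fusion
hidden = layers.Dense(1024, activation="relu")(merged)   # one hidden MLP layer
score = layers.Dense(1, activation="sigmoid")(hidden)    # matching probability

matcher = keras.Model([q_vec, c_vec], score)
matcher.compile(optimizer="adadelta", loss="binary_crossentropy")
```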
Note that the CNNs in the sentence representation component can be either original CNNs with sequence input or lattice based CNNs with lattice input. Intuitively, in an original CNN layer, several kernels scan every n-gram in a sequence and produce one feature vector, which can be seen as the representation of the center word and is fed into the following layers. However, in a lattice, each word may have different context words at different granularities and may be treated as the center of several kernel spans of the same length. Therefore, unlike the original CNNs, several feature vectors could be produced for a given word, which is the key challenge in applying standard CNNs directly to a lattice input.
For the example shown in Figure FIGREF6 , the word “citizen” is the center word of four text spans with length 3: “China - citizen - life”, “China - citizen - alive”, “country - citizen - life”, “country - citizen - alive”, so four feature vectors will be produced for width-3 convolutional kernels for “citizen”.
Word Lattice
As shown in Figure FIGREF4 , a word lattice is a directed graph INLINEFORM0 , where INLINEFORM1 represents a node set and INLINEFORM2 represents an edge set. For a sentence in Chinese, which is a sequence of Chinese characters INLINEFORM3 , all of its possible substrings that can be considered as words are treated as vertexes, i.e. INLINEFORM4 . Then, all neighboring words are connected by directed edges according to their positions in the original sentence, i.e. INLINEFORM5 .
Here, one of the key issues is how we decide a sequence of characters can be considered as a word. We approach this through an existing lookup vocabulary, which contains frequent words in BaiduBaike. Note that most Chinese characters can be considered as words on their own, thus are included in this vocabulary when they have been used as words on their own in this corpus.
However, doing so will inevitably introduce noisy words (e.g., “middle” in Figure FIGREF4 ) into word lattices, which will be smoothed by pooling procedures in our model. And the constructed graphs could be disconnected because of a few out-of-vocabulary characters. Thus, we append INLINEFORM0 labels to replace those characters to connect the graph.
Obviously, word lattices are collections of characters and all possible words. Therefore, it is not necessary to make explicit decisions regarding specific word segmentations, but just embed all possible information into the lattice and take them to the next CNN layers. The inherent graph structure of a word lattice allows all possible words represented explicitly, no matter the overlapping and nesting cases, and all of them can contribute directly to the sentence representations.
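The construction just described can be sketched in a few lines of Python. This is an illustrative reading of the definition rather than the actual preprocessing code: the toy vocabulary, the function name, and the details of the unknown-character handling are assumptions.

```python
# Build a word lattice from a character sequence and a lookup vocabulary:
# every in-vocabulary substring becomes a vertex, out-of-vocabulary single
# characters become <unk> vertices to keep the graph connected, and a directed
# edge links each word to every word that starts right after it ends.
def build_word_lattice(chars, vocab, unk="<unk>"):
    n = len(chars)
    vertices = []  # (start, end, token) spans over the character sequence
    for i in range(n):
        for j in range(i + 1, n + 1):
            token = "".join(chars[i:j])
            if token in vocab:
                vertices.append((i, j, token))
        if not any(v[0] == i and v[1] == i + 1 for v in vertices):
            vertices.append((i, i + 1, unk))  # keep the lattice connected
    # directed edges between adjacent spans
    edges = [(u, v) for u in vertices for v in vertices if u[1] == v[0]]
    return vertices, edges

toy_vocab = {"a", "b", "c", "ab", "abc"}  # illustrative stand-in vocabulary
print(build_word_lattice(list("abcd"), toy_vocab))
```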
Lattice based CNN Layer
As mentioned in the previous section, we cannot directly apply standard CNNs to take a word lattice as input, since multiple feature vectors could be produced for a given word. Inspired by previous lattice LSTM models BIBREF10 , BIBREF11 , we propose a lattice based CNN layer that allows standard CNNs to work over word lattice input. Specifically, we utilize pooling mechanisms to merge the feature vectors produced by multiple CNN kernels over different context compositions.
Formally, the output feature vector of a lattice CNN layer with kernel size INLINEFORM0 at word INLINEFORM1 in a word lattice INLINEFORM2 can be formulated as Eq EQREF12 : DISPLAYFORM0
where INLINEFORM0 is the activation function, INLINEFORM1 is the input vector corresponding to word INLINEFORM2 in this layer, INLINEFORM3 denotes the concatenation of these vectors, and INLINEFORM4 are parameters with sizes INLINEFORM5 and INLINEFORM6 , respectively. INLINEFORM7 is the input dimension and INLINEFORM8 is the output dimension. INLINEFORM9 is one of the following pooling functions: max-pooling, ave-pooling, or gated-pooling, which perform the element-wise maximum, the element-wise average, and the gated operation, respectively. The gated operation can be formulated as: DISPLAYFORM0
where INLINEFORM0 are parameters, and INLINEFORM1 are gated weights normalized by a softmax function. Intuitively, the gates represent the importance of the n-gram contexts, and the weighted sum can control the transmission of noisy context words. We perform padding when necessary.
For example, in Figure FIGREF6 , when we consider “citizen” as the center word, and the kernel size is 3, there will be five words and four context compositions involved, as mentioned in the previous section, each marked in different colors. Then, 3 kernels scan on all compositions and produce four 3-dim feature vectors. The gated weights are computed based on those vectors via a dense layer, which can reflect the importance of each context compositions. The output vector of the center word is their weighted sum, where noisy contexts are expected to have lower weights to be smoothed. This pooling over different contexts allows LCNs to work over word lattice input.
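The gated pooling over context compositions can be illustrated with a small numpy sketch for a single center word and kernel width 3. The exact parameterization of the gate is simplified here, and all shapes and weights are illustrative assumptions rather than the actual implementation.

```python
# For one center word: each (pre, center, post) context composition is scanned
# by the same width-3 kernel, a gate score is computed from each resulting
# feature vector through a shared dense layer, the scores are softmax
# normalized, and the pooled output is their weighted sum.
import numpy as np

def lcn_gated_pool(compositions, W_kernel, b_kernel, w_gate, b_gate):
    # compositions: list of arrays of shape (3, d_in) -- pre, center, post
    feats = []
    for comp in compositions:
        h = np.maximum(0.0, W_kernel @ comp.reshape(-1) + b_kernel)  # ReLU
        feats.append(h)
    feats = np.stack(feats)                       # (n_compositions, d_out)
    scores = feats @ w_gate + b_gate              # one gate score per context
    gates = np.exp(scores - scores.max())
    gates = gates / gates.sum()                   # softmax-normalized weights
    return (gates[:, None] * feats).sum(axis=0)   # weighted sum, (d_out,)

d_in, d_out = 4, 3
comps = [np.random.randn(3, d_in) for _ in range(4)]  # 4 context compositions
out = lcn_gated_pool(comps,
                     np.random.randn(d_out, 3 * d_in), np.zeros(d_out),
                     np.random.randn(d_out), 0.0)
print(out.shape)  # (3,)
```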
Word lattice can be seen as directed graphs and modeled by Directed Graph Convolutional networks (DGCs) BIBREF12 , which use poolings on neighboring vertexes that ignore the semantic structure of n-grams. But to some situations, their formulations can be very similar to ours (See Appendix for derivation). For example, if we set the kernel size in LCNs to 3, use linear activations and suppose the pooling mode is average in both LCNs and DGCs, at each word in each layer, the DGCs compute the average of the first order neighbors together with the center word, while the LCNs compute the average of the pre and post words separately and add them to the center word. Empirical results are exhibited in Experiments section.
Finally, given a sentence that has been constructed into a word-lattice form, for each node in the lattice, an LCN layer will produce one feature vector similar to original CNNs, which makes it easier to stack multiple LCN layers to obtain more abstract feature representations.
Experiments
Our experiments are designed to answer: (1) whether multi-granularity information in word lattice helps in matching based QA tasks, (2) whether LCNs capture the multi-granularity information through lattice well, and (3) how to balance the noisy and informative words introduced by word lattice.
Datasets
We conduct experiments on two Chinese question answering datasets from NLPCC-2016 evaluation task BIBREF13 .
DBQA is a document based question answering dataset. There are 8.8k questions with 182k question-sentence pairs for training and 6k questions with 123k question-sentence pairs in the test set. On average, each question has 20.6 candidate sentences and 1.04 golden answers. The average length of questions is 15.9 characters, and each candidate sentence has 38.4 characters on average. Both questions and sentences are natural language sentences, possibly sharing more similar word choices and expressions compared to the KBQA case. But the candidate sentences are extracted from web pages and are often much longer than the questions, with many irrelevant clauses.
KBRE is a knowledge based relation extraction dataset. We follow the same preprocessing as BIBREF14 to clean the dataset and replace entity mentions in questions with a special token. There are 14.3k questions with 273k question-predicate pairs in the training set and 9.4k questions with 156k question-predicate pairs for testing. Each question contains only one golden predicate. On average, each question has 18.1 candidate predicates and is 8.1 characters long, while a KB predicate is only 3.4 characters long. Note that a KB predicate is usually a concise phrase with quite different word choices compared to the natural language questions, which poses a different challenge.
The vocabulary we use to construct word lattices contains 156k words, including 9.1k single-character words. On average, each DBQA question contains 22.3 tokens (words or characters) in its lattice, each DBQA candidate sentence has 55.8 tokens, each KBQA question has 10.7 tokens, and each KBQA predicate contains 5.1 tokens.
Evaluation Metrics
For both datasets, we follow the evaluation metrics used in the original evaluation tasks BIBREF13 . For DBQA, P@1 (Precision@1), MAP (Mean Average Precision) and MRR (Mean Reciprocal Rank) are adopted. For KBRE, since only one golden candidate is labeled for each question, only P@1 and MRR are used.
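For reference, the three metrics can be computed as in the following sketch, assuming each question comes with candidate scores and binary relevance labels, and that all golden answers appear among the candidates (the variable names are ours, not from the evaluation toolkit).

```python
# P@1, MAP, and MRR over a set of questions, each given as (scores, labels).
def p_at_1(ranked_labels):
    return float(ranked_labels[0])

def average_precision(ranked_labels):
    # assumes all golden answers appear among the ranked candidates
    hits, precisions = 0, []
    for rank, label in enumerate(ranked_labels, start=1):
        if label:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

def reciprocal_rank(ranked_labels):
    for rank, label in enumerate(ranked_labels, start=1):
        if label:
            return 1.0 / rank
    return 0.0

def evaluate(questions):
    ranked = [[l for _, l in sorted(zip(s, ls), reverse=True)]
              for s, ls in questions]
    n = len(ranked)
    return (sum(map(p_at_1, ranked)) / n,             # P@1
            sum(map(average_precision, ranked)) / n,  # MAP
            sum(map(reciprocal_rank, ranked)) / n)    # MRR
```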
Implementation Details
The word embeddings are 300-dimensional, trained on Baidu Baike webpages with Google's word2vec, and fine-tuned during training. In DBQA, we also follow previous works BIBREF15 , BIBREF16 and concatenate additional 1d indicators with the word vectors, which denote whether the word occurs in both the question and the candidate sentence. In each CNN layer, there are 256, 512, and 256 kernels with widths 1, 2, and 3, respectively. The size of the hidden layer of the MLP is 1024. All activations are ReLU, the dropout rate is 0.5, and the batch size is 64. We optimize with adadelta BIBREF17 with learning rate INLINEFORM0 and decay factor INLINEFORM1 . We only tune the number of convolutional layers over [1, 2, 3] and fix the other hyper-parameters. We sample at most 10 negative sentences per question in DBQA and 5 in KBRE. We implement our models in Keras with the Tensorflow backend.
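For illustration, a sequential (non-lattice) version of the sentence encoder could be sketched in Keras as below, using the kernel numbers and widths listed above. The vocabulary size, sequence length, number of layers, and dropout placement are assumptions rather than the exact released configuration.

```python
# Sequential CNN sentence encoder: 256/512/256 kernels of widths 1/2/3 per
# layer, residual connections between convolutional layers, and global
# max-pooling to obtain the sentence vector.
from tensorflow.keras import layers, Input, Model

def conv_block(x):
    branches = [layers.Conv1D(f, w, padding="same", activation="relu")(x)
                for f, w in ((256, 1), (512, 2), (256, 3))]
    return layers.Concatenate()(branches)  # 1024-dim features per position

def build_encoder(vocab_size=160000, max_len=60, emb_dim=300, n_layers=2):
    tokens = Input(shape=(max_len,), dtype="int32")
    x = layers.Embedding(vocab_size, emb_dim)(tokens)
    x = conv_block(x)
    for _ in range(n_layers - 1):
        y = conv_block(x)
        x = layers.Add()([x, y])            # residual connection
    x = layers.Dropout(0.5)(x)
    sent_vec = layers.GlobalMaxPooling1D()(x)
    return Model(tokens, sent_vec)

encoder = build_encoder()
encoder.summary()
```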
Baselines
Our first set of baselines uses original CNNs with character (CNN-char) or word inputs. For each sentence, two Chinese word segmenters are used to obtain three different word sequences: jieba (CNN-jieba), and Stanford Chinese word segmenter in CTB (CNN-CTB) and PKU (CNN-PKU) mode.
Our second set of baselines combines different word segmentations. Specifically, we concatenate the sentence embeddings from different segment results, which gives four different word+word models: jieba+PKU, PKU+CTB, CTB+jieba, and PKU+CTB+jieba.
Inspired by previous works BIBREF2 , BIBREF3 , we also concatenate word and character embeddings at the input level. Specially, when the basic sequence is in word level, each word may be constructed by multiple characters through a pooling operation (Word+Char). Our pilot experiments show that average-pooling is the best for DBQA while max-pooling after a dense layer is the best for KBQA. When the basic sequence is in character level, we simply concatenate the character embedding with its corresponding word embedding (Char+Word), since each character belongs to one word only. Again, when the basic sequence is in character level, we can also concatenate the character embedding with a pooled representation of all words that contain this character in the word lattice (Char+Lattice), where we use max pooling as suggested by our pilot experiments.
DGCs BIBREF12 , BIBREF18 are strong baselines that perform CNNs over directed graphs to produce high level representation for each vertex in the graph, which can be used to build a sentence representation via certain pooling operation. We therefore choose to compare with DGC-max (with maximum pooling), DGC-ave (with average pooling), and DGC-gated (with gated pooling), where the gate value is computed using the concatenation of the vertex vector and the center vertex vector through a dense layer. We also implement several state-of-the-art matching models using the open-source project MatchZoo BIBREF19 , where we tune hyper-parameters using grid search, e.g., whether using word or character inputs. Arc1, Arc2, CDSSM are traditional CNNs based matching models proposed by BIBREF20 , BIBREF21 . Arc1 and CDSSM compute the similarity via sentence representations and Arc2 uses the word pair similarities. MV-LSTM BIBREF22 computes the matching score by examining the interaction between the representations from two sentences obtained by a shared BiLSTM encoder. MatchPyramid(MP) BIBREF23 utilizes 2D convolutions and pooling strategies over word pair similarity matrices to compute the matching scores.
We also compare with the state-of-the-art models in DBQA BIBREF15 , BIBREF16 .
Results
Here, we mainly describe the main results on the DBQA dataset, while we find very similar trends on the KBRE dataset. Table TABREF26 summarizes the main results on the two datasets. We can see that the simple MatchZoo models perform the worst. Although Arc1 and CDSSM are also constructed in the siamese architecture with CNN layers, they do not employ multiple kernel sizes and residual connections, and fail to capture the relatedness in a multi-granularity fashion.
BIBREF15 is similar to our word level models (CNN-jieba/PKU/CTB), but outperforms our models by around 3%, since it benefits from an extra interaction layer with fine tuned hyper-parameters. BIBREF16 further incorporates human designed features including POS-tag interaction and TF-IDF scores, achieving state-of-the-art performance in the literature of this DBQA dataset. However, both of them perform worse than our simple CNN-char model, which is a strong baseline because characters, that describe the text in a fine granularity, can relieve word mismatch problem to some extent. And our best LCNs model further outperforms BIBREF16 by .0134 in MRR.
For single granularity CNNs, CNN-char performs better than all word level models, because they heavily suffer from word mismatching given one fixed word segmentation result. And the models that utilize different word segmentations can relieve this problem and gain better performance, which can be further improved by the combination of words and characters. The DGCs and LCNs, being able to work on lattice input, outperform all previous models that have sequential inputs, indicating that the word lattice is a more promising form than a single word sequence, and should be better captured by taking the inherent graph structure into account. Although they take the same input, LCNs still perform better than the best DGCs by a margin, showing the advantages of the CNN kernels over multiple n-grams in the lattice structures and the gated pooling strategy.
To fairly compare with previous KBQA works, we combine our LCN-ave settings with the entity linking results of the state-of-the-art KBQA model BIBREF14 . The P@1 for question answering of single LCN-ave is 86.31%, which outperforms both the best single model (84.55%) and the best ensembled model (85.40%) in literature.
Analysis and Discussions
As shown in Table TABREF26 , the combined word level models (e.g. CTB+jieba or PKU+CTB) perform better than any word level CNNs with single word segmentation result (e.g. CNN-CTB or CNN-PKU). The main reason is that there are often no perfect Chinese word segmenters and a single improper segmentation decision may harm the matching performance, since that could further make the word mismatching issue worse, while the combination of different word segmentation results can somehow relieve this situation.
Furthermore, the models combining words and characters all perform better than PKU+CTB+jieba, because they could be complementary in different granularities. Specifically, Word+Char is still worse than CNN-char, because Chinese characters have rich meanings and compressing several characters to a single word vector will inevitably lose information. Furthermore, the combined sequence of Word+Char still exploits in a word level, which still suffers from the single segmentation decision. On the other side, the Char+Word model is also slightly worse than CNN-char. We think one reason is that the reduplicated word embeddings concatenated with each character vector confuse the CNNs, and perhaps lead to overfitting. But, we can still see that Char+Word performs better than Word+Char, because the former exploits in a character level and the fine-granularity information actually helps to relieve word mismatch. Note that Char+Lattice outperforms Char+Word, and even slightly better than CNN-char. This illustrates that multiple word segmentations are still helpful to further improve the character level strong baseline CNN-char, which may still benefit from word level information in a multi-granularity fashion.
In conclusion, the combination between different sequences and information of different granularities can help improve text matching, showing that it is necessary to consider the fashion which considers both characters and more possible words, which perhaps the word lattice can provide.
For DGCs with different kinds of pooling operations, average pooling (DGC-ave) performs the best and delivers performance similar to LCN-ave. DGC-max performs slightly worse because it ignores the importance of different edges, and the maximum operation is more sensitive to noise than the average operation. DGC-gated performs the worst. Compared with LCN-gated, which learns the gate value adaptively from multiple n-gram contexts, it is harder for a DGC to learn the importance of each edge from the node and the center node in the word lattice. It is not surprising that LCN-gated performs much better than DGC-gated, indicating again that n-grams in the word lattice play an important role in context modeling, while DGCs are designed for general directed graphs and may not be a perfect fit for the word lattice.
For LCNs with different pooling operations, LCN-max and LCN-ave lead to similar performance and do better on KBRE, while LCN-gated is better on DBQA. This may be because sentences in DBQA are relatively long and contain more irrelevant information, which requires filtering out noisy context, whereas on KBRE, with much shorter predicate phrases, LCN-gated may slightly overfit due to its more complex model structure. Overall, LCNs perform better than DGCs, thanks to their advantage in capturing multiple n-gram contexts in the word lattice.
To investigate more intuitively how LCNs utilize multi-granularity information, we analyze the MRR score against the granularity of overlaps between questions and answers in the DBQA dataset, as shown in Figure FIGREF32 . CNN-char performs impressively better than CNN-CTB in the first few groups, where most overlaps are single characters that cause serious word mismatch. As the length of the overlaps grows, CNN-CTB catches up and finally overtakes CNN-char, even though its overall performance is much lower. These results show that word information is complementary to characters to some extent. LCN-gated approaches CNN-char in the first few groups and outperforms both character and word level models in the following groups, where word level information becomes more powerful. This demonstrates that LCNs can effectively take advantage of different granularities, and the combination is not harmful even when the matching clues appear in extreme cases.
How to Create Word Lattice
In the previous experiments, we construct word lattices via an existing lookup vocabulary, which inevitably introduces some noisy words. Here we instead construct lattices from various word segmentations with different strategies, to investigate the balance between the noisy words and the additional information introduced by the word lattice. We only use the DBQA dataset, because its word lattices are more complex, so the construction strategies have more influence. Pilot experiments show that word lattices constructed on top of the character sequence perform better, so the strategies in Table TABREF33 are based on CNN-char.
Table TABREF33 shows that all kinds of lattices are better than CNN-char, which again evidences the usefulness of word information. Among all LCN models, a more complex lattice generally produces better performance, which indicates that LCNs can handle noisy words well and that the influence of noisy words does not cancel out the positive information brought by complex lattices. It is also noticeable that LCN-gated is better than LCN-C+20 by a considerable margin, which shows that words outside the common tokenizations (e.g. “livelihood” in Fig FIGREF4 ) are potentially useful.
Besides the enlarged vocabulary, LCNs only introduce a negligible number of parameters for gated pooling, which is not a heavy burden. The training speed is about 2.8 batches per second, 5 times slower than the original CNNs, and the whole training of a 2-layer LCN-gated on the DBQA dataset takes only about 37.5 minutes. The efficiency could be further improved if the network structure were built dynamically with frameworks that support it. The fast speed and small parameter increment give LCNs a promising future in more NLP tasks.
Case Study
Figure FIGREF37 shows a case study comparing models at different input levels. The word level model is relatively coarse in how it utilizes information, and finds a sentence with the longest overlap (5 words, 12 characters). However, it does not realize that the question is about the number of people, and that “DaoHang”(navigate) is a verb in the question but a noun in the sentence. The character level model finds a long sentence which covers most of the characters in the question, which shows the power of fine-grained matching. But without the help of words, it is hard to distinguish the “Ren”(people) in “DuoShaoRen”(how many people) from that in “ChuangShiRen”(founder), so it loses the most important information. In the lattice, although overlaps are limited, “WangZhan”(website, “Wang” web, “Zhan” station) can match “WangZhi”(Internet addresses, “Wang” web, “Zhi” addresses) and also relates to “DaoHang”(navigate), from which the model may infer that “WangZhan”(website) refers to “tao606 seller website navigation”(a website name). Moreover, “YongHu”(user) can match “Ren”(people). With cooperation between characters and words, the model catches the key points of the question and eliminates the other two candidates; as a result, it finds the correct answer.
Related Work
Deep learning models have been widely adopted in natural language sentence matching. Representation based models BIBREF21 , BIBREF7 , BIBREF0 , BIBREF8 encode and compare the matching branches in a hidden space. Interaction based models BIBREF23 , BIBREF22 , BIBREF3 incorporate interaction features between all word pairs and adopt 2D convolutions to extract matching features. Our models are built upon the representation based architecture, which is better suited to short text matching.
In recent years, many researchers have become interested in utilizing all sorts of external or multi-granularity information in matching tasks. BIBREF24 exploit hidden units at different depths to realize interaction between substrings of different lengths. BIBREF3 join multiple pooling methods when merging sentence level features, and BIBREF4 exploit interactions between text spans of different lengths. Among the works most similar to ours, BIBREF3 also incorporate characters, which are fed into LSTMs whose outputs are concatenated with word embeddings, and BIBREF8 utilize words together with predicate level tokens in the KBRE task. However, none of them exploit the multi-granularity information in word lattices for languages like Chinese that do not have spaces to segment words naturally. Furthermore, our model does not conflict with most of them (except BIBREF3 ) and could gain further improvement from their techniques.
GCNs BIBREF25 , BIBREF26 and graph-RNNs BIBREF27 , BIBREF28 have extended CNNs and RNNs to model graph information, and DGCs generalize GCNs on directed graphs in the fields of semantic-role labeling BIBREF12 , document dating BIBREF18 , and SQL query embedding BIBREF29 . However, DGCs control information flowing from neighbor vertexes via edge types, while we focus on capturing different contexts for each word in word lattice via convolutional kernels and poolings.
Previous works have incorporated Chinese word lattices into RNNs for Chinese-English translation BIBREF10 , Chinese named entity recognition BIBREF11 , and Chinese word segmentation BIBREF30 . To the best of our knowledge, we are the first to run CNNs over word lattices, and the first to use word lattices in matching tasks. Moreover, we are motivated to utilize the multi-granularity information in word lattices to relieve word mismatch and diverse expressions in Chinese question answering, while they mainly focus on error propagation from segmenters.
Conclusions
In this paper, we propose a novel neural network matching method (LCNs) for matching based question answering in Chinese. Rather than relying on a single word sequence, our model takes a word lattice as input. By performing CNNs over multiple n-gram contexts to exploit multi-granularity information, LCNs can relieve the word mismatch challenge. Thorough experiments show that our model can better explore the word lattice via convolutional operations and rich context-aware pooling, and thus outperforms the state-of-the-art models and competitive baselines by a large margin. Further analyses show that the lattice input takes advantage of both word and character level information, and that the vocabulary based lattice constructor outperforms strategies that combine characters with different word segmentations.
Acknowledgments
This work is supported by Natural Science Foundation of China (Grant No. 61672057, 61672058, 61872294); the UK Engineering and Physical Sciences Research Council under grants EP/M01567X/1 (SANDeRs) and EP/M015793/1 (DIVIDEND); and the Royal Society International Collaboration Grant (IE161012). For any correspondence, please contact Yansong Feng. | Precision@1, Mean Average Precision, Mean Reciprocal Rank |
4b128f9e94d242a8e926bdcb240ece279d725729 | 4b128f9e94d242a8e926bdcb240ece279d725729_0 | Q: Which dataset(s) do they evaluate on?
Text: Introduction
Short text matching plays a critical role in many natural language processing tasks, such as question answering and information retrieval. However, matching text sequences for Chinese or similar languages often suffers from word segmentation, as there is often no perfect Chinese word segmentation tool that suits every scenario. Text matching usually requires capturing the relatedness between two sequences at multiple granularities. For example, in Figure FIGREF4 , the example phrase is generally tokenized as “China – citizen – life – quality – high”, but when we plan to match it with “Chinese – live – well”, it would be more helpful to have the example segmented into “Chinese – livelihood – live” than its common segmentation.
Existing efforts use neural network models to improve matching, based on the fact that distributed representations can generalize the discrete word features in traditional bag-of-words methods. There are also works fusing word level and character level information, which, to some extent, can relieve the mismatch between different segmentations, but these solutions still suffer from the original sequential word structures. They usually depend on an existing word tokenization, which has to make segmentation choices once and for all, e.g., “ZhongGuo”(China) versus “ZhongGuoRen”(Chinese) when processing “ZhongGuoRenMin”(Chinese people). And the blending is performed at only one position in their frameworks.
Specific tasks such as question answering (QA) could pose further challenges for short text matching. In document based question answering (DBQA), the matching degree is expected to reflect how likely a sentence can answer a given question, where questions and candidate answer sentences usually come from different sources, and may exhibit significantly different styles or syntactic structures, e.g. queries in web search and sentences in web pages. This could further aggravate the mismatch problems. In knowledge based question answering (KBQA), one of the key tasks is to match relational expressions in questions with knowledge base (KB) predicate phrases, such as “ZhuCeDi”(place of incorporation). Here the diversity between the two kinds of expressions is even more significant, where there may be dozens of different verbal expressions in natural language questions corresponding to only one KB predicate phrase. Those expression problems make KBQA a further tough task. Previous works BIBREF0 , BIBREF1 adopt letter-trigrams for the diverse expressions, which is similar to character level of Chinese. And the lattices are combinations of words and characters, so with lattices, we can utilize words information at the same time.
Recent advances have put effort into modeling multi-granularity information for matching. BIBREF2 , BIBREF3 blend words and characters into a single sequence (at the word level), and BIBREF4 utilize multiple convolutional kernel sizes to capture different n-grams. But most Chinese characters can be seen as words on their own, so directly combining characters with their corresponding words may lose the meanings that those characters can express alone. Because of their sequential inputs, these models either lose word level information when operating on character sequences or have to make segmentation choices.
In this paper, we propose a multi-granularity method for short text matching in Chinese question answering which utilizes lattice based CNNs to extract sentence level features over word lattice. Specifically, instead of relying on character or word level sequences, LCNs take word lattices as input, where every possible word and character will be treated equally and have their own context so that they can interact at every layer. For each word in each layer, LCNs can capture different context words in different granularity via pooling methods. To the best of our knowledge, we are the first to introduce word lattice into the text matching tasks. Because of the similar IO structures to original CNNs and the high efficiency, LCNs can be easily adapted to more scenarios where flexible sentence representation modeling is required.
We evaluate our LCNs models on two question answering tasks, document based question answering and knowledge based question answering, both in Chinese. Experimental results show that LCNs significantly outperform the state-of-the-art matching methods and other competitive CNNs baselines in both scenarios. We also find that LCNs can better capture the multi-granularity information from plain sentences, and, meanwhile, maintain better de-noising capability than vanilla graphic convolutional neural networks thanks to its dynamic convolutional kernels and gated pooling mechanism.
Lattice CNNs
Our Lattice CNNs framework is built upon the siamese architecture BIBREF5 , one of the most successful frameworks in text matching, which takes the word lattice format of a pair of sentences as input, and outputs the matching score.
Siamese Architecture
The siamese architecture and its variants have been widely adopted in sentence matching BIBREF6 , BIBREF3 and matching based question answering BIBREF7 , BIBREF0 , BIBREF8 . It has a symmetrical component that extracts high level features from different input channels, sharing parameters and mapping inputs to the same vector space. Then, the sentence representations are merged and compared to output the similarity.
For our models, we use multi-layer CNNs for sentence representation. Residual connections BIBREF9 are used between convolutional layers to enrich features and make it easier to train. Then, max-pooling summarizes the global features to get the sentence level representations, which are merged via element-wise multiplication. The matching score is produced by a multi-layer perceptron (MLP) with one hidden layer based on the merged vector. The fusing and matching procedure is formulated as follows: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are feature vectors of question and candidate (sentence or predicate) separately encoded by CNNs, INLINEFORM2 is the sigmoid function, INLINEFORM3 are parameters, and INLINEFORM4 is element-wise multiplication. The training objective is to minimize the binary cross-entropy loss, defined as: DISPLAYFORM0
where INLINEFORM0 is the {0,1} label for the INLINEFORM1 training pair.
Note that the CNNs in the sentence representation component can be either original CNNs with sequence input or lattice based CNNs with lattice input. Intuitively, in an original CNN layer, several kernels scan every n-gram in a sequence and produce one feature vector, which can be seen as the representation of the center word and is fed into the following layers. However, in a lattice, each word may have different context words at different granularities and may be treated as the center of several kernel spans of the same length. Therefore, unlike the original CNNs, several feature vectors could be produced for a given word, which is the key challenge in applying standard CNNs directly to a lattice input.
For the example shown in Figure FIGREF6 , the word “citizen” is the center word of four text spans with length 3: “China - citizen - life”, “China - citizen - alive”, “country - citizen - life”, “country - citizen - alive”, so four feature vectors will be produced for width-3 convolutional kernels for “citizen”.
Word Lattice
As shown in Figure FIGREF4 , a word lattice is a directed graph INLINEFORM0 , where INLINEFORM1 represents a node set and INLINEFORM2 represents an edge set. For a sentence in Chinese, which is a sequence of Chinese characters INLINEFORM3 , all of its possible substrings that can be considered as words are treated as vertexes, i.e. INLINEFORM4 . Then, all neighboring words are connected by directed edges according to their positions in the original sentence, i.e. INLINEFORM5 .
Here, one of the key issues is how we decide a sequence of characters can be considered as a word. We approach this through an existing lookup vocabulary, which contains frequent words in BaiduBaike. Note that most Chinese characters can be considered as words on their own, thus are included in this vocabulary when they have been used as words on their own in this corpus.
However, doing so will inevitably introduce noisy words (e.g., “middle” in Figure FIGREF4 ) into word lattices, which will be smoothed by pooling procedures in our model. And the constructed graphs could be disconnected because of a few out-of-vocabulary characters. Thus, we append INLINEFORM0 labels to replace those characters to connect the graph.
Obviously, word lattices are collections of characters and all possible words. Therefore, it is not necessary to make explicit decisions regarding specific word segmentations, but just embed all possible information into the lattice and take them to the next CNN layers. The inherent graph structure of a word lattice allows all possible words represented explicitly, no matter the overlapping and nesting cases, and all of them can contribute directly to the sentence representations.
Lattice based CNN Layer
As mentioned in the previous section, we cannot directly apply standard CNNs to take a word lattice as input, since multiple feature vectors could be produced for a given word. Inspired by previous lattice LSTM models BIBREF10 , BIBREF11 , we propose a lattice based CNN layer that allows standard CNNs to work over word lattice input. Specifically, we utilize pooling mechanisms to merge the feature vectors produced by multiple CNN kernels over different context compositions.
Formally, the output feature vector of a lattice CNN layer with kernel size INLINEFORM0 at word INLINEFORM1 in a word lattice INLINEFORM2 can be formulated as Eq EQREF12 : DISPLAYFORM0
where INLINEFORM0 is the activation function, INLINEFORM1 is the input vector corresponding to word INLINEFORM2 in this layer, INLINEFORM3 denotes the concatenation of these vectors, and INLINEFORM4 are parameters with sizes INLINEFORM5 and INLINEFORM6 , respectively. INLINEFORM7 is the input dimension and INLINEFORM8 is the output dimension. INLINEFORM9 is one of the following pooling functions: max-pooling, ave-pooling, or gated-pooling, which perform the element-wise maximum, the element-wise average, and the gated operation, respectively. The gated operation can be formulated as: DISPLAYFORM0
where INLINEFORM0 are parameters, and INLINEFORM1 are gated weights normalized by a softmax function. Intuitively, the gates represent the importance of the n-gram contexts, and the weighted sum can control the transmission of noisy context words. We perform padding when necessary.
For example, in Figure FIGREF6 , when we consider “citizen” as the center word, and the kernel size is 3, there will be five words and four context compositions involved, as mentioned in the previous section, each marked in different colors. Then, 3 kernels scan on all compositions and produce four 3-dim feature vectors. The gated weights are computed based on those vectors via a dense layer, which can reflect the importance of each context compositions. The output vector of the center word is their weighted sum, where noisy contexts are expected to have lower weights to be smoothed. This pooling over different contexts allows LCNs to work over word lattice input.
Word lattice can be seen as directed graphs and modeled by Directed Graph Convolutional networks (DGCs) BIBREF12 , which use poolings on neighboring vertexes that ignore the semantic structure of n-grams. But to some situations, their formulations can be very similar to ours (See Appendix for derivation). For example, if we set the kernel size in LCNs to 3, use linear activations and suppose the pooling mode is average in both LCNs and DGCs, at each word in each layer, the DGCs compute the average of the first order neighbors together with the center word, while the LCNs compute the average of the pre and post words separately and add them to the center word. Empirical results are exhibited in Experiments section.
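The special case described above can be made concrete with a tiny numpy sketch: under linear activations and average pooling with kernel size 3, DGC-ave averages the center word together with its first-order neighbors, while LCN-ave averages predecessors and successors separately and adds both averages to the center word. The vectors below are toy stand-ins for word representations in the lattice.

```python
# Contrast of the two averaging schemes for one word in the lattice.
import numpy as np

def dgc_ave(center, pres, posts):
    # average of the center word and its first-order neighbors
    return np.mean([center] + pres + posts, axis=0)

def lcn_ave_k3(center, pres, posts):
    # average predecessors and successors separately, add both to the center
    return center + np.mean(pres, axis=0) + np.mean(posts, axis=0)

center = np.ones(4)
pres = [np.full(4, 2.0), np.full(4, 4.0)]  # predecessor words
posts = [np.full(4, 6.0)]                  # successor words
print(dgc_ave(center, pres, posts))        # [3.25 3.25 3.25 3.25]
print(lcn_ave_k3(center, pres, posts))     # [10. 10. 10. 10.]
```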
Finally, given a sentence that has been constructed into a word-lattice form, for each node in the lattice, an LCN layer will produce one feature vector similar to original CNNs, which makes it easier to stack multiple LCN layers to obtain more abstract feature representations.
Experiments
Our experiments are designed to answer: (1) whether multi-granularity information in word lattice helps in matching based QA tasks, (2) whether LCNs capture the multi-granularity information through lattice well, and (3) how to balance the noisy and informative words introduced by word lattice.
Datasets
We conduct experiments on two Chinese question answering datasets from NLPCC-2016 evaluation task BIBREF13 .
DBQA is a document based question answering dataset. There are 8.8k questions with 182k question-sentence pairs for training and 6k questions with 123k question-sentence pairs in the test set. On average, each question has 20.6 candidate sentences and 1.04 golden answers. The average length of questions is 15.9 characters, and each candidate sentence has 38.4 characters on average. Both questions and sentences are natural language sentences, possibly sharing more similar word choices and expressions compared to the KBQA case. But the candidate sentences are extracted from web pages and are often much longer than the questions, with many irrelevant clauses.
KBRE is a knowledge based relation extraction dataset. We follow the same preprocessing as BIBREF14 to clean the dataset and replace entity mentions in questions with a special token. There are 14.3k questions with 273k question-predicate pairs in the training set and 9.4k questions with 156k question-predicate pairs for testing. Each question contains only one golden predicate. On average, each question has 18.1 candidate predicates and is 8.1 characters long, while a KB predicate is only 3.4 characters long. Note that a KB predicate is usually a concise phrase with quite different word choices compared to the natural language questions, which poses a different challenge.
The vocabulary we use to construct word lattices contains 156k words, including 9.1k single-character words. On average, each DBQA question contains 22.3 tokens (words or characters) in its lattice, each DBQA candidate sentence has 55.8 tokens, each KBQA question has 10.7 tokens, and each KBQA predicate contains 5.1 tokens.
Evaluation Metrics
For both datasets, we follow the evaluation metrics used in the original evaluation tasks BIBREF13 . For DBQA, P@1 (Precision@1), MAP (Mean Average Precision) and MRR (Mean Reciprocal Rank) are adopted. For KBRE, since only one golden candidate is labeled for each question, only P@1 and MRR are used.
Implementation Details
The word embeddings are 300-dimensional, trained on Baidu Baike webpages with Google's word2vec, and fine-tuned during training. In DBQA, we also follow previous works BIBREF15 , BIBREF16 and concatenate additional 1d indicators with the word vectors, which denote whether the word occurs in both the question and the candidate sentence. In each CNN layer, there are 256, 512, and 256 kernels with widths 1, 2, and 3, respectively. The size of the hidden layer of the MLP is 1024. All activations are ReLU, the dropout rate is 0.5, and the batch size is 64. We optimize with adadelta BIBREF17 with learning rate INLINEFORM0 and decay factor INLINEFORM1 . We only tune the number of convolutional layers over [1, 2, 3] and fix the other hyper-parameters. We sample at most 10 negative sentences per question in DBQA and 5 in KBRE. We implement our models in Keras with the Tensorflow backend.
Baselines
Our first set of baselines uses original CNNs with character (CNN-char) or word inputs. For each sentence, two Chinese word segmenters are used to obtain three different word sequences: jieba (CNN-jieba), and Stanford Chinese word segmenter in CTB (CNN-CTB) and PKU (CNN-PKU) mode.
Our second set of baselines combines different word segmentations. Specifically, we concatenate the sentence embeddings from different segment results, which gives four different word+word models: jieba+PKU, PKU+CTB, CTB+jieba, and PKU+CTB+jieba.
Inspired by previous works BIBREF2 , BIBREF3 , we also concatenate word and character embeddings at the input level. Specially, when the basic sequence is in word level, each word may be constructed by multiple characters through a pooling operation (Word+Char). Our pilot experiments show that average-pooling is the best for DBQA while max-pooling after a dense layer is the best for KBQA. When the basic sequence is in character level, we simply concatenate the character embedding with its corresponding word embedding (Char+Word), since each character belongs to one word only. Again, when the basic sequence is in character level, we can also concatenate the character embedding with a pooled representation of all words that contain this character in the word lattice (Char+Lattice), where we use max pooling as suggested by our pilot experiments.
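As an illustration of the two character-level combinations just described (not the actual preprocessing code), the following numpy sketch builds Char+Word and Char+Lattice inputs; the toy embeddings and function names are assumptions.

```python
# Char+Word concatenates each character embedding with the embedding of the
# single segmented word containing it; Char+Lattice concatenates it with a
# max-pooled embedding of all lattice words covering that character.
import numpy as np

def char_plus_word(char_embs, word_of_char, word_embs):
    return [np.concatenate([c, word_embs[word_of_char[i]]])
            for i, c in enumerate(char_embs)]

def char_plus_lattice(char_embs, words_of_char, word_embs):
    return [np.concatenate([c, np.max([word_embs[w] for w in words_of_char[i]],
                                      axis=0)])
            for i, c in enumerate(char_embs)]

char_embs = [np.random.randn(3) for _ in range(2)]
word_embs = {"w1": np.random.randn(3), "w2": np.random.randn(3)}
print(char_plus_word(char_embs, ["w1", "w1"], word_embs)[0].shape)          # (6,)
print(char_plus_lattice(char_embs, [["w1"], ["w1", "w2"]], word_embs)[1].shape)
```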
DGCs BIBREF12 , BIBREF18 are strong baselines that perform CNNs over directed graphs to produce high level representation for each vertex in the graph, which can be used to build a sentence representation via certain pooling operation. We therefore choose to compare with DGC-max (with maximum pooling), DGC-ave (with average pooling), and DGC-gated (with gated pooling), where the gate value is computed using the concatenation of the vertex vector and the center vertex vector through a dense layer. We also implement several state-of-the-art matching models using the open-source project MatchZoo BIBREF19 , where we tune hyper-parameters using grid search, e.g., whether using word or character inputs. Arc1, Arc2, CDSSM are traditional CNNs based matching models proposed by BIBREF20 , BIBREF21 . Arc1 and CDSSM compute the similarity via sentence representations and Arc2 uses the word pair similarities. MV-LSTM BIBREF22 computes the matching score by examining the interaction between the representations from two sentences obtained by a shared BiLSTM encoder. MatchPyramid(MP) BIBREF23 utilizes 2D convolutions and pooling strategies over word pair similarity matrices to compute the matching scores.
We also compare with the state-of-the-art models in DBQA BIBREF15 , BIBREF16 .
Results
Here, we mainly describe the main results on the DBQA dataset, while we find very similar trends on the KBRE dataset. Table TABREF26 summarizes the main results on the two datasets. We can see that the simple MatchZoo models perform the worst. Although Arc1 and CDSSM are also constructed in the siamese architecture with CNN layers, they do not employ multiple kernel sizes and residual connections, and fail to capture the relatedness in a multi-granularity fashion.
BIBREF15 is similar to our word level models (CNN-jieba/PKU/CTB), but outperforms our models by around 3%, since it benefits from an extra interaction layer with fine tuned hyper-parameters. BIBREF16 further incorporates human designed features including POS-tag interaction and TF-IDF scores, achieving state-of-the-art performance in the literature of this DBQA dataset. However, both of them perform worse than our simple CNN-char model, which is a strong baseline because characters, that describe the text in a fine granularity, can relieve word mismatch problem to some extent. And our best LCNs model further outperforms BIBREF16 by .0134 in MRR.
Among single-granularity CNNs, CNN-char performs better than all word level models, because the latter heavily suffer from word mismatch given a single fixed word segmentation. The models that combine different word segmentations relieve this problem and perform better, and the combination of words and characters brings further improvement. The DGCs and LCNs, which operate directly on lattice input, outperform all models with sequential inputs, indicating that the word lattice is a more promising form than a single word sequence and is better exploited by taking its inherent graph structure into account. Although they take the same input, LCNs still outperform the best DGCs by a clear margin, showing the advantage of CNN kernels over multiple n-grams in the lattice together with the gated pooling strategy.
To compare fairly with previous KBQA works, we combine our LCN-ave setting with the entity linking results of the state-of-the-art KBQA model BIBREF14. The P@1 for question answering of a single LCN-ave is 86.31%, which outperforms both the best single model (84.55%) and the best ensemble model (85.40%) in the literature.
Analysis and Discussions
As shown in Table TABREF26, the combined word level models (e.g. CTB+jieba or PKU+CTB) perform better than any word level CNN with a single word segmentation result (e.g. CNN-CTB or CNN-PKU). The main reason is that there is no perfect Chinese word segmenter, and a single improper segmentation decision may harm matching performance by aggravating the word mismatch issue, whereas combining different word segmentation results can partly relieve this situation.
Furthermore, the models combining words and characters all perform better than PKU+CTB+jieba, because the two granularities are complementary. Specifically, Word+Char is still worse than CNN-char, because Chinese characters have rich meanings and compressing several characters into a single word vector inevitably loses information; moreover, the combined sequence of Word+Char still operates at the word level and thus still suffers from the single segmentation decision. The Char+Word model is also slightly worse than CNN-char. We think one reason is that the duplicated word embeddings concatenated with each character vector confuse the CNNs and perhaps lead to overfitting. Still, Char+Word performs better than Word+Char, because the former operates at the character level, where the fine-grained information helps relieve word mismatch. Note that Char+Lattice outperforms Char+Word, and is even slightly better than CNN-char. This illustrates that multiple word segmentations can further improve the strong character level baseline CNN-char, which may still benefit from word level information in a multi-granularity fashion.
In conclusion, combining different sequences and information of different granularities helps improve text matching, showing that it is necessary to consider both characters and as many candidate words as possible, which is exactly what the word lattice can provide.
Among DGCs with different pooling operations, average pooling (DGC-ave) performs the best, delivering performance similar to LCN-ave. DGC-max performs a little worse, because it ignores the relative importance of different edges and the maximum operation is more sensitive to noise than the average. DGC-gated performs the worst: compared with LCN-gated, which learns the gate values adaptively from multiple n-gram contexts, it is harder for a DGC to learn the importance of each edge from the vertex and the center vertex alone. It is therefore not surprising that LCN-gated performs much better than DGC-gated, indicating again that n-grams in the word lattice play an important role in context modeling, while DGCs are designed for general directed graphs and may not be ideally suited to word lattices.
Among LCNs with different pooling operations, LCN-max and LCN-ave deliver similar performance and do better on KBRE, while LCN-gated is better on DBQA. This may be because sentences in DBQA are relatively long and contain more irrelevant information, which requires filtering noisy context, whereas on KBRE, with much shorter predicate phrases, LCN-gated may slightly overfit due to its more complex structure. Overall, LCNs perform better than DGCs, thanks to their ability to capture multiple n-gram contexts in the word lattice.
To investigate more intuitively how LCNs utilize multiple granularities, we analyze the MRR score against the granularity of overlaps between questions and answers on the DBQA dataset, as shown in Figure FIGREF32. CNN-char performs markedly better than CNN-CTB in the first few groups, where most overlaps are single characters and word mismatch is severe. As the overlaps grow longer, CNN-CTB catches up and finally overtakes CNN-char, even though its overall performance is much lower. These results show that word information is complementary to characters to some extent. LCN-gated is close to CNN-char in the first few groups, and outperforms both the character and word level models in the following groups, where word level information becomes more powerful. This demonstrates that LCNs can effectively take advantage of different granularities, and that the combination is not harmful even when the matching clues appear in extreme cases.
How to Create Word Lattice
In previous experiments, we construct the word lattice via an existing lookup vocabulary, which inevitably introduces some noisy words. Here we instead construct lattices from various word segmentations with different strategies, to investigate the balance between the noisy words and the additional information introduced by the word lattice. We only use the DBQA dataset, because its word lattices are more complex, so the construction strategies have more influence. Pilot experiments show that word lattices constructed on top of the character sequence perform better, so the strategies in Table TABREF33 are based on CNN-char.
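A minimal sketch of the vocabulary-lookup construction described above is given below. The function name, the maximum word length, and the toy example are illustrative assumptions; the paper's actual constructor may differ in details such as filtering.

def build_word_lattice(chars, vocab, max_word_len=4):
    """Enumerate every substring of the character sequence that appears in the
    vocabulary; each match becomes a lattice word spanning (start, end).
    Single characters are always kept, so the character sequence itself remains
    a path through the lattice."""
    spans = []
    n = len(chars)
    for start in range(n):
        for end in range(start, min(n, start + max_word_len)):
            token = "".join(chars[start:end + 1])
            if end == start or token in vocab:
                spans.append((start, end, token))
    return spans

# toy usage: characters 'a', 'b', 'c', 'd' with a two-word vocabulary
print(build_word_lattice(list("abcd"), {"ab", "bcd"}))
# -> [(0, 0, 'a'), (0, 1, 'ab'), (1, 1, 'b'), (1, 3, 'bcd'), (2, 2, 'c'), (3, 3, 'd')]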
Table TABREF33 shows that all kinds of lattice are better than CNN-char, which again evidences the value of word information. Among the LCN models, more complex lattices in general produce better performance, indicating that LCNs handle noisy words well: the influence of the noisy words does not cancel the positive information brought by richer lattices. It is also noticeable that LCN-gated is better than LCN-C+20 by a considerable margin, which shows that words outside the common tokenizations (e.g. "livelihood" in Fig FIGREF4) are potentially useful.
Besides the enlarged vocabulary, LCNs introduce only a negligible number of parameters in the gated pooling, which is not a heavy burden. The training speed is about 2.8 batches per second, 5 times slower than the original CNNs, and the whole training of a 2-layer LCN-gated on the DBQA dataset takes only about 37.5 minutes. The efficiency could be further improved if the network structure were built dynamically with frameworks that support it. The fast speed and small parameter increment give LCNs a promising future in more NLP tasks.
Case Study
Figure FIGREF37 shows a case study comparing models operating at different input levels. The word level model is relatively coarse in utilizing the available information: it finds the sentence with the longest overlap (5 words, 12 characters), but does not realize that the question asks about a number of people, and that "DaoHang" (navigate) is a verb in the question but a noun in the sentence. The character level model finds a long sentence which covers most of the characters in the question, which shows the power of fine-grained matching. But without the help of words, it is hard to distinguish the "Ren" (people) in "DuoShaoRen" (how many people) from that in "ChuangShiRen" (founder), so it misses the most important information. In the lattice, although the overlaps are limited, "WangZhan" (website; "Wang" web, "Zhan" station) can match "WangZhi" (Internet address; "Wang" web, "Zhi" address) and also relates to "DaoHang" (navigate), from which the model may infer that "WangZhan" (website) refers to "tao606 seller website navigation" (a website name). Moreover, "YongHu" (user) can match "Ren" (people). Through the cooperation between characters and words, the lattice model catches the key points of the question, eliminates the other two candidates, and finds the correct answer.
Related Work
Deep learning models have been widely adopted for natural language sentence matching. Representation based models BIBREF21, BIBREF7, BIBREF0, BIBREF8 encode and compare the two matching branches in a hidden space. Interaction based models BIBREF23, BIBREF22, BIBREF3 incorporate interaction features between all word pairs and adopt 2D convolutions to extract matching features. Our models are built upon the representation based architecture, which is better suited to short text matching.
In recent years, many researchers have become interested in utilizing external or multi-granularity information in matching tasks. BIBREF24 exploit hidden units at different depths to realize interactions between substrings of different lengths. BIBREF3 join multiple pooling methods when merging sentence level features, and BIBREF4 exploit interactions between text spans of different lengths. Closer to our work, BIBREF3 also incorporate characters, which are fed into LSTMs whose outputs are concatenated with word embeddings, and BIBREF8 utilize words together with predicate level tokens in the KBRE task. However, none of them exploit the multi-granularity information of a word lattice in languages like Chinese, which has no spaces to segment words naturally. Furthermore, our model is compatible with most of these approaches (except BIBREF3) and could gain further improvements from them.
GCNs BIBREF25, BIBREF26 and graph-RNNs BIBREF27, BIBREF28 have extended CNNs and RNNs to model graph information, and DGCs generalize GCNs to directed graphs in the fields of semantic role labeling BIBREF12, document dating BIBREF18, and SQL query embedding BIBREF29. However, DGCs control the information flowing from neighbouring vertices via edge types, while we focus on capturing different contexts for each word in the word lattice via convolutional kernels and pooling.
Previous works have incorporated Chinese word lattices into RNNs for Chinese-English translation BIBREF10, Chinese named entity recognition BIBREF11, and Chinese word segmentation BIBREF30. To the best of our knowledge, we are the first to apply CNNs to word lattices, and the first to use word lattices in matching tasks. Moreover, we aim to exploit the multi-granularity information in word lattices to relieve word mismatch and diverse expressions in Chinese question answering, while previous works mainly focus on error propagation from segmenters.
Conclusions
In this paper, we propose a novel neural network matching method (LCNs) for matching based question answering in Chinese. Rather than relying on a single word sequence, our model takes a word lattice as input. By performing CNNs over multiple n-gram contexts to exploit multi-granularity information, LCNs can relieve the word mismatch challenge. Thorough experiments show that our model can better exploit the word lattice via convolutional operations and rich context-aware pooling, and thus outperforms the state-of-the-art models and competitive baselines by a large margin. Further analyses show that the lattice input takes advantage of both word and character level information, and that the vocabulary-based lattice constructor outperforms the strategies that simply combine characters with different word segmentations.
Acknowledgments
This work is supported by Natural Science Foundation of China (Grant No. 61672057, 61672058, 61872294); the UK Engineering and Physical Sciences Research Council under grants EP/M01567X/1 (SANDeRs) and EP/M015793/1 (DIVIDEND); and the Royal Society International Collaboration Grant (IE161012). For any correspondence, please contact Yansong Feng. | DBQA, KBRE |
f8f13576115992b0abb897ced185a4f9d35c5de9 | f8f13576115992b0abb897ced185a4f9d35c5de9_0 | Q: What languages do they look at?
Text: Introduction
The dynamics of language evolution is one of many interdisciplinary fields to which methods and insights from statistical physics have been successfully applied (see BIBREF0 for an overview, and BIBREF1 for a specific comprehensive review).
In this work we revisit the question of language coexistence. It is known that a sizeable fraction of the more than 6000 languages that are currently spoken, is in danger of becoming extinct BIBREF2, BIBREF3, BIBREF4. In pioneering work by Abrams and Strogatz BIBREF5, theoretical predictions were made to the effect that less attractive or otherwise unfavoured languages are generally doomed to extinction, when contacts between speakers of different languages become sufficiently frequent. Various subsequent investigations have corroborated this finding, emphasising that the simultaneous coexistence of competing languages is only possible in specific circumstances BIBREF6, BIBREF7, all of which share the common feature that they involve some symmetry breaking mechanism BIBREF1. A first scenario can be referred to as spatial symmetry breaking. Different competing languages may coexist in different geographical areas, because they are more or less favoured locally, despite the homogenising effects of migration and language shift BIBREF8, BIBREF9, BIBREF10. A second scenario corresponds to a more abstract internal symmetry breaking. Two or more competing languages may coexist at a given place if the populations of speakers of these languages have imbalanced dynamics BIBREF11, BIBREF12, BIBREF13. Moreover, it has been shown that a stable population of bilinguals or multilinguals also favours the coexistence of several languages BIBREF14, BIBREF15, BIBREF16.
The aim of the present study is to provide a quantitative understanding of the conditions which ensure the coexistence of two or more competing languages within each of the symmetry breaking scenarios outlined above. Throughout this paper, in line with many earlier studies on the dynamics of languages BIBREF5, BIBREF7, BIBREF8, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, and with an investigation of grammar acquisition BIBREF17, we describe the dynamics of the numbers of speakers of various languages by means of coupled rate equations. This approach is sometimes referred to as ecological modelling, because of its similarity with models used in theoretical ecology (see e.g. BIBREF18). From a broader perspective, systems of coupled differential equations, and especially Lotka-Volterra equations and replicator equations, are ubiquitous in game theory and in a broad range of areas in mathematical biology (see e.g. BIBREF19, BIBREF20, BIBREF21).
The plan of this paper is as follows. For greater clarity, we first consider in Section SECREF2 the situation of several competing languages in a single geographic area where the population is well mixed. We address the situation where internal symmetry is broken by imbalanced population dynamics. The relevant concepts are reviewed in detail in the case of two competing languages in Section SECREF1, and the full phase diagram of the model is derived. The case of an arbitrary number $N$ of competing languages is then considered in Section SECREF11 in full generality. The special situation where the attractivenesses of the languages are equally spaced is studied in Section SECREF22, whereas Section SECREF34 is devoted to the case where attractivenesses are modelled as random variables. Section SECREF3 is devoted to the situation where coexistence is due to spatial symmetry breaking. We focus our attention onto the simple case of two languages in competition on a linear array of $M$ distinct geographic areas. Language attractivenesses vary arbitrarily along the array, whereas migrations take place only between neighbouring areas at a uniform rate $\gamma $. A uniform consensus is reached at high migration rate, where the same language survives everywhere. This general result is demonstrated in detail for two geographic areas (Section SECREF57), and generalised to an arbitrary number $M$ of areas (Section SECREF67). The cases of ordered and random attractiveness profiles are investigated in Sections SECREF71 and SECREF84. In Section SECREF4 we present a non-technical discussion of our findings and their implications. Two appendices contain technical details about the regime of a large number of competing languages in a single geographic area (Appendix SECREF5) and about stability matrices and their spectra (Appendix SECREF6).
Breaking internal symmetry: language coexistence by imbalanced population dynamics
This section is devoted to the dynamics of languages in a single geographic area. As mentioned above, it has been shown that two or more competing languages may coexist only if the populations of speakers of these languages have imbalanced dynamics BIBREF11, BIBREF12, BIBREF13. Our goal is to make these conditions more explicit and to provide a quantitative understanding of them.
Breaking internal symmetry: language coexistence by imbalanced population dynamics ::: Two competing languages
We begin with the case of two competing languages. We assume that language 1 is more favoured than language 2. Throughout this work we neglect the effect of bilingualism, so that at any given time $t$ each individual speaks a single well-defined language. Let $X_1(t)$ and $X_2(t)$ denote the numbers of speakers of each language at time $t$, so that $X(t)=X_1(t)+X_2(t)$ is the total population of the area under consideration.
The dynamics of the model is defined by the coupled rate equations
The above equations are an example of Lotka-Volterra equations (see e.g. BIBREF18, BIBREF19). The terms underlined by braces describe the intrinsic dynamics of the numbers of speakers of each language. For the sake of simplicity we have chosen the well-known linear-minus-bilinear or `logistic' form which dates back to Lotka BIBREF22 and is still commonly used in population dynamics. The linear term describes population growth, whereas the quadratic terms represent a saturation mechanism.
The main novelty of our approach is the introduction of the parameter $q$ in the saturation terms. This imbalance parameter is responsible for the internal symmetry breaking leading to language coexistence. It allows for the interpolation between two situations: when the saturation mechanism only involves the total population, i.e., $q=1$, and when the saturation mechanism acts separately on the populations of speakers of each language, $q=0$, which is the situation considered by Pinasco and Romanelli BIBREF11. Generic values of $q$ correspond to tunably imbalanced dynamics.
The last term in each of equations (DISPLAY_FORM2), () describes the language shift consisting of the conversions of single individuals from the less favoured language 2 to the more favoured language 1. In line with earlier studies BIBREF7, BIBREF11, BIBREF12, BIBREF13, conversions are triggered by binary interactions between individuals, so that the frequency of conversions is proportional to the product $X_1(t)X_2(t)$. The reduced conversion rate $C$ measures the difference of attractivenesses between the two languages.
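The displayed equations referred to above are not reproduced in this extraction. As a hedged reconstruction, consistent with the description of the growth, saturation and conversion terms (and not a verbatim copy of the original display), the coupled rate equations can be sketched as

$$
\frac{{\rm d}X_1}{{\rm d}t}=X_1\bigl[1-(X_1+q\,X_2)\bigr]+C\,X_1X_2,
\qquad
\frac{{\rm d}X_2}{{\rm d}t}=X_2\bigl[1-(X_2+q\,X_1)\bigr]-C\,X_1X_2,
$$

in reduced units where the carrying capacity is set to unity. For $q=1$ the saturation terms involve only the total population $X=X_1+X_2$, while for $q=0$ they act separately on each population, as stated above.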
For generic values of the parameters $q$ and $C$, the rate equations (DISPLAY_FORM2), () admit a unique stable fixed point. The dynamics converges exponentially fast to the corresponding stationary state, irrespective of initial conditions. There are two possible kinds of stationary states:
I. Consensus.
The solution $X_1=1$, $X_2=0$
describes a consensus state where the unfavoured language 2 is extinct. The inverse relaxation times describing convergence toward the latter state are the opposites of the eigenvalues of the stability matrix associated with equations (DISPLAY_FORM2), (). The reader is referred to Appendix SECREF131 for details. These inverse relaxation times read
The above stationary solution is thus stable whenever $q+C>1$.
II. Coexistence.
The solution
describes a coexistence state where both languages survive forever. This stationary solution exists whenever $q+C<1$. It is always stable, as the inverse relaxation times read
Figure FIGREF9 shows the phase diagram of the model in the $q$–$C$ plane. There is a possibility of language coexistence only for $q<1$. The vertical axis ($q=0$) corresponds to the model considered by Pinasco and Romanelli BIBREF11, where the coexistence phase is maximal and extends up to $C=1$. As the parameter $q$ is increased, the coexistence phase shrinks until it disappears at the point $q=1$, corresponding to the balanced dynamics where the saturation mechanism involves the total population.
The model exhibits a continuous transition along the phase boundary between both phases ($q+C=1$). The number $X_2$ of speakers of the unfavoured language vanishes linearly as the phase boundary is approached from the coexistence phase (see (DISPLAY_FORM7)), whereas the relaxation time $1/\omega _2$ diverges linearly as the phase boundary is approached from both sides (see (DISPLAY_FORM5) and (DISPLAY_FORM8)).
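For concreteness, solving the reconstructed equations sketched above for the interior fixed point (a derivation from that sketch, not the paper's own display) gives

$$
X_1=\frac{1-q+C}{1-q^2+C^2},\qquad X_2=\frac{1-q-C}{1-q^2+C^2},
$$

which indeed has $X_2$ vanishing linearly in $1-q-C$ as the phase boundary is approached, in line with the behaviour just described.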
For parameters along the phase boundary ($q+C=1$), the less attractive language still becomes extinct, albeit very slowly. Equations (DISPLAY_FORM2), () here yield the power-law relaxation laws
irrespective of initial conditions.
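A quick numerical check of this phase diagram can be sketched in Python. The integrator below uses the reconstructed equations given earlier, so the equations, the parameter values and the survival tolerance are all illustrative assumptions:

def integrate_two_languages(q, C, x1=0.5, x2=0.5, dt=2e-3, steps=500_000):
    """Euler integration of the reconstructed two-language rate equations."""
    for _ in range(steps):
        dx1 = x1 * (1.0 - (x1 + q * x2)) + C * x1 * x2
        dx2 = x2 * (1.0 - (x2 + q * x1)) - C * x1 * x2
        x1 += dt * dx1
        x2 += dt * dx2
    return x1, x2

for q, C in [(0.5, 0.2), (0.5, 0.8)]:   # q + C below and above 1
    x1, x2 = integrate_two_languages(q, C)
    state = "coexistence" if x2 > 1e-6 else "consensus (language 1 only)"
    print(f"q={q}, C={C}: X1={x1:.4f}, X2={x2:.4f} -> {state}")

With these settings the first parameter set should relax to a coexistence state and the second to consensus, in agreement with the $q+C=1$ boundary.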
Breaking internal symmetry: language coexistence by imbalanced population dynamics ::: $N$ competing languages
The above setting can be extended to the case of an arbitrary number $N$ of competing languages in a given area. Languages, numbered $i=1,\dots ,N$, are more or less favoured, depending on their attractivenesses $A_i$. The latter quantities are assumed to be quenched, i.e., fixed once and for all. This non-trivial static profile of attractivenesses is responsible for conversions of single individuals from less attractive to more attractive languages.
Let $X(t)$ be the total population of the area under consideration at time $t$, and $X_i(t)$ be the number of speakers of language number $i=1,\dots ,N$. The dynamics of the model are defined by the rate equations
The terms underlined by braces describe the intrinsic dynamics of the numbers of speakers of each language. The novel feature here is again the presence of the parameter $q$, which is responsible for imbalanced dynamics, allowing thus the possibility of language coexistence. The last term in (DISPLAY_FORM12) describes the conversions of single individuals. If language $i$ is more attractive than language $j$, there is a net positive conversion rate $C_{ji}=-C_{ij}$ from language $j$ to language $i$. For the sake of simplicity, we assume that these conversion rates depend linearly on the differences of attractivenesses between departure and target languages, i.e.,
in some consistent units.
Throughout this work we shall not pay any attention to the evolution of the whole population $X(t)$. We therefore reformulate the model in terms of the fractions
of speakers of the various languages, which sum up to unity:
The reduction to be derived below is quite natural in the present setting. It provides an example of the reduction of Lotka-Volterra equations to replicator equations, proposed in BIBREF23 (see also BIBREF19, BIBREF20, BIBREF21). In the present situation, for $q<1$, which is precisely the range of $q$ where there is a possibility of language coexistence, the dynamics of the fractions $x_i(t)$ obeys the following reduced rate equations, which can be derived from (DISPLAY_FORM12):
with
and where attractivenesses and conversion rates have been rescaled according to
In the following, we focus our attention onto the stationary states of the model, rather than on its dynamics. It is therefore legitimate to redefine time according to
so that equations (DISPLAY_FORM16) simplify to
The rate equations (DISPLAY_FORM20) for the fractions of speakers of the $N$ competing languages will be the starting point of further developments. The quantity $Z(t)$ can be alternatively viewed as a dynamical Lagrange multiplier ensuring that the dynamics conserves the sum rule (DISPLAY_FORM15). The above equations belong to the class of replicator equations (see e.g. BIBREF19, BIBREF20, BIBREF21). Extensive studies of the dynamics of this class of equations have been made in mathematical biology, where the main focus has been on systematic classifications of fixed points and bifurcations in low-dimensional cases BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28.
From now on, we focus on the stationary state of the model for arbitrarily high values of the number $N$ of competing languages. The analysis of this goes as follows. The stationary values $x_i$ of the fractions of speakers are such that the right-hand sides of (DISPLAY_FORM20) vanish. For each language number $i$, there are two possibilities: either $x_i=0$, i.e., language $i$ gets extinct, or $x_i>0$, i.e., language $i$ survives forever. The non-zero fractions $x_i$ of speakers of surviving languages obey the coupled linear equations
where the parameter $Z$ is determined by expressing that the sum rule (DISPLAY_FORM15) holds in the stationary state. For generic values of model parameters, there is a unique stationary state, and the system relaxes exponentially fast to the latter, irrespective of its initial conditions. The uniqueness of the attractor is characteristic of the specific form of the rate equations (DISPLAY_FORM20), (DISPLAY_FORM21), with skew-symmetric conversion rates $c_{ij}$ (see (DISPLAY_FORM18)). This has been demonstrated explicitly in the case of two competing languages, studied in detail in Section SECREF1. The problem is however more subtle than it seems at first sight, as the number $K$ of surviving languages depends on model parameters in a non-trivial way.
Breaking internal symmetry: language coexistence by imbalanced population dynamics ::: The case of equally spaced attractivenesses
It is useful to consider first the simple case where the (reduced) attractivenesses $a_i$ of the $N$ competing languages are equally spaced between 0 and some maximal value that we denote by $2g$. Numbering languages in order of decreasing attractivenesses, so that language 1 is the most attractive and language $N$ the least attractive, this reads
We have
The parameter $g$ is therefore the mean attractiveness.
The (reduced) conversion rates read
so that the fixed-point equations (DISPLAY_FORM21) take the form
Already in this simple situation the number $K$ of surviving languages depends on the mean attractiveness $g$ in a non-trivial way.
Consider first the situation where all languages survive ($K=N$). This is certainly true for $g=0$, where there are no conversions, so that the solution is simply $x_i=1/N$. There, all languages are indeed equally popular, as nothing distinguishes them. More generally, as long as all languages survive, the stationary solution obeying (DISPLAY_FORM26) reads
for $i=1,\dots ,N$. The above solution ceases to hold when the fraction of speakers of the least attractive language vanishes, i.e., $x_N=0$. This first extinction takes place for the threshold value
of the mean attractiveness $g$.
Consider now the general case where only $K$ among the $N$ languages survive. These are necessarily the $K$ most attractive ones, shown as red symbols in Figure FIGREF29.
In this situation, (DISPLAY_FORM26) yields
for $i=1,\dots ,K$. The linear relationship between the attractiveness $a_i$ of language $i$ and the stationary fraction $x_i$ of speakers of that language, observed in (DISPLAY_FORM27) and (DISPLAY_FORM30), is a general feature of the model (see Section SECREF34). The fraction $x_K$ of speakers of the least attractive of the surviving languages vanishes at the following threshold mean attractiveness:
for $K=2,\dots ,N$.
The following picture therefore emerges for the stationary state of $N$ competing languages with equally spaced attractivenesses. The number $K$ of surviving languages decreases as a function of the mean attractiveness $g$, from $K=N$ (all languages survive) near $g=0$ to $K=1$ (consensus) as very large $g$. Less attractive languages become extinct one by one as every single one of the thresholds (DISPLAY_FORM31) is traversed, so that
Figure FIGREF33 illustrates this picture for 5 competing languages. In each of the sectors defined in (DISPLAY_FORM32), the stationary fractions $x_i$ of speakers of the surviving languages are given by (DISPLAY_FORM30). They depend continuously on the mean attractiveness $g$, even though they are given by different expressions in different sectors. In particular, $x_i$ is flat, i.e., independent of $g$, in the sector where $K=2i-1$. The fraction $x_1$ of speakers of the most attractive language grows monotonically as a function of $g$, whereas all the other fractions of speakers eventually go to zero.
When the number of languages $N$ is large, the range of values of $g$ where the successive transitions take place is very broad. The threshold at which a consensus is reached, $g_{N,2}=N/2$, is indeed much larger than the threshold at which the least attractive language disappears, $g_{N,N}=1/(N-1)$. The ratio between these two extreme thresholds reads $N(N-1)/2$.
Breaking internal symmetry: language coexistence by imbalanced population dynamics ::: The general case
We now turn to the general case of $N$ competing languages with arbitrary reduced attractivenesses $a_i$. Throughout the following, languages are numbered in order of decreasing attractivenesses, i.e.,
We shall be interested mostly in the stationary state of the model. As already mentioned above, the number $K$ of surviving languages depends on model parameters in a non-trivial way. The $K$ surviving languages are always the most attractive ones (see Figure FIGREF29). The fractions $x_i$ of speakers of those languages, obeying the fixed-point equations (DISPLAY_FORM21), can be written in full generality as
for $i=1,\dots ,K$, with
The existence of an explicit expression (DISPLAY_FORM36) for the solution of the fixed-point equations (DISPLAY_FORM21) in full generality is a consequence of their simple linear-minus-bilinear form, which also ensures the uniqueness of the attractor.
The number $K$ of surviving languages is the largest such that the solution (DISPLAY_FORM36) obeys $x_i>0$ for $i=1,\dots ,K$. Equivalently, $K$ is the largest integer in $1,\dots ,N$ such that
Every single one of the differences involved in the sum is positive, so that:
From now on, we model attractivenesses as independent random variables. More precisely, we set
where $w$ is the mean attractiveness, and the rescaled attractivenesses $\xi _i$ are positive random variables drawn from some continuous distribution $f(\xi )$ such that $\left\langle \xi \right\rangle =1$. For any given instance of the model, i.e., any draw of the $N$ random variables $\lbrace \xi _i\rbrace $, languages are renumbered in order of decreasing attractivenesses (see (DISPLAY_FORM35)).
For concreteness we assume that $f(0)$ is non-vanishing and that $f(\xi )$ falls off more rapidly than $1/\xi ^3$ at large $\xi $. These hypotheses respectively imply that small values of $\xi $ are allowed with non-negligible probability and ensure the convergence of the second moment $\left\langle \xi ^2\right\rangle =1+\sigma ^2$, where $\sigma ^2$ is the variance of $\xi $.
Some quantities of interest can be expressed in closed form for all language numbers $N$. One example is the consensus probability ${\cal P}$, defined as the probability of reaching consensus, i.e., of having $K=1$ (see (DISPLAY_FORM39)). This reads
We have
for all $N\ge 2$, where
is the cumulative distribution of $\xi $.
In forthcoming numerical and analytical investigations we use the following distributions:
We begin our exploration of the model by looking at the dynamics of a typical instance of the model with $N=10$ languages and a uniform distribution of attractivenesses with $w=0.3$. Figure FIGREF45 shows the time-dependent fractions of speakers of all languages, obtained by solving the rate equations (DISPLAY_FORM20) numerically, with the uniform initial condition $x_i(0)=1/10$ for all $i$. In this example there are $K=6$ surviving languages. The plotted quantities are observed to converge to their stationary values given by (DISPLAY_FORM36) for $i=1,\dots ,6$, and to zero for $i=7,\dots ,10$. They are ordered as the corresponding attractivenesses at all positive times, i.e., $x_1(t)>x_2(t)>\dots >x_N(t)$. Some of the fractions however exhibit a non-monotonic evolution. This is the case for $i=5$ in the present example.
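Stationary states of this kind can be sketched numerically without integrating the dynamics, under the assumption (consistent with the linear attractiveness-fraction relation and with the survival criterion described above, but not a verbatim transcription of the omitted displays) that the $K$ most attractive languages survive with fractions $x_i = 1/K + a_i - \frac{1}{K}\sum_{j\le K} a_j$, where $K$ is the largest integer for which all of these fractions are positive:

import numpy as np

def stationary_fractions(a):
    """Given attractivenesses a (any order), return them sorted in decreasing
    order together with the stationary fractions, under the assumed linear
    fixed-point form x_i = 1/K + a_i - (1/K) * sum_{j<=K} a_j."""
    a = np.sort(np.asarray(a, dtype=float))[::-1]
    x = np.zeros_like(a)
    for K in range(len(a), 0, -1):          # largest K with all fractions positive
        cand = 1.0 / K + a[:K] - a[:K].sum() / K
        if np.all(cand > 0.0):
            x[:K] = cand
            break
    return a, x

rng = np.random.default_rng(1)
w = 0.3
a = 2.0 * w * rng.random(10)                # uniform attractivenesses with mean w, N = 10
a_sorted, x = stationary_fractions(a)
print("number of surviving languages K =", int(np.count_nonzero(x)))
print("fractions:", np.round(x, 3), "(sum =", round(float(x.sum()), 6), ")")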
Figure FIGREF48 shows the distribution $p_K$ of the number $K$ of surviving languages, for $N=10$ (top) and $N=40$ (bottom), and a uniform distribution of attractivenesses for four values of the product
This choice is motivated by the analysis of Appendix SECREF5. Each dataset is the outcome of $10^7$ draws of the attractiveness profile. The widths of the distributions $p_K$ are observed to shrink as $N$ is increased, in agreement with the expected $1/\sqrt{N}$ behavior stemming from the law of large numbers. The corresponding mean fractions $\left\langle K\right\rangle /N$ of surviving languages are shown in Table TABREF49 to converge smoothly to the asymptotic prediction (DISPLAY_FORM126), i.e.,
with $1/N$ corrections.
An overall picture of the dependence of the statistics of surviving languages on the mean attractiveness $w$ is provided by Figure FIGREF50, showing the mean number $\left\langle K\right\rangle $ of surviving languages against $w$, for $N=10$ and uniform and exponential attractiveness distributions. The plotted quantity decreases monotonically, starting from the value $\left\langle K\right\rangle =N$ in the absence of conversions ($w=0$), and converging to its asymptotic value $\left\langle K\right\rangle =1$ in the $w\rightarrow \infty $ limit, where consensus is reached with certainty. Its dependence on $w$ is observed to be steeper for the exponential distribution. These observations are corroborated by the asymptotic analysis of Appendix SECREF5. For the uniform distribution, (DISPLAY_FORM126) yields the scaling law $\left\langle K\right\rangle \approx (N/w)^{1/2}$. Concomitantly, the consensus probability becomes sizeable for $w\sim N$ (see (DISPLAY_FORM124)). For the exponential distribution, (DISPLAY_FORM130) yields the decay law $\left\langle K\right\rangle \approx 1/w$, irrespective of $N$, and the consensus probability is strictly independent of $N$ (see (DISPLAY_FORM128)).
Breaking spatial symmetry: language coexistence by inhomogeneous attractivenesses
As mentioned in the Introduction, different competing languages may coexist in distinct geographical areas, because they are more or less favoured locally, despite the homogenising effects of migration and language shift BIBREF8, BIBREF9, BIBREF10. The aim of this section is to provide a quantitative understanding of this scenario. We continue to use the approach and the formalism of Section SECREF2. We however take the liberty of adopting slightly different notations, as both sections are entirely independent.
We consider the dynamics of two competing languages in a structured territory comprising several distinct geographic areas. For definiteness, we assume that the population of each area is homogeneous. We restrict ourselves to the geometry of an array of $M$ areas, where individuals can only migrate along the links joining neighbouring areas, as shown in Figure FIGREF51. We assume for simplicity that the migration rates $\gamma $ between neighbouring areas are uniform, so that in the very long run single individuals eventually perform random walks across the territory. The relative attractivenesses of both competing languages are distributed inhomogeneously among the various areas, so that the net conversion rate $C_m$ from language 2 to language 1 depends on the area number $m$. Finally, in order to emphasise the effects of spatial inhomogeneity on their own, we simplify the model by neglecting imbalance and thus set $q=1$.
Let $X_m(t)$ and $Y_m(t)$ denote the respective numbers of speakers of language 1 and of language 2 in area number $m=1,\dots ,M$ at time $t$. The dynamics of the model is defined by the coupled rate equations
The extremal sites $m=1$ and $m=M$ have only one neighbour. The corresponding equations have to be modified accordingly. The resulting boundary conditions can be advantageously recast as
and similarly for other quantities. These are known as Neumann boundary conditions.
The total populations $P_m(t)=X_m(t)+Y_m(t)$ of the various areas obey
irrespective of the conversion rates $C_m$. As a consequence, in the stationary state all areas have the same population, which reads $P_m=1$ in our reduced units. The corresponding stability matrix is given in (DISPLAY_FORM137). The population profile $P_m(t)$ therefore converges exponentially fast to its uniform stationary value, with unit relaxation time ($\omega =1$).
From now on we assume, for simplicity, that the total population of each area is unity in the initial state. This property is preserved by the dynamics, i.e., we have $P_m(t)=1$ for all $m$ and $t$, so that the rate equations (DISPLAY_FORM52) simplify to
The rate equations (DISPLAY_FORM55) for the fractions $X_m(t)$ of speakers of language 1 in the various areas provide another example of the broad class of replicator equations (see e.g. BIBREF19, BIBREF20, BIBREF21). The above equations are the starting point of the subsequent analysis. In the situation where language 1 is uniformly favoured or disfavoured, so that the conversion rates are constant ($C_m=C$), the above rate equations boil down to the discrete Fisher-Kolmogorov-Petrovsky-Piscounov (FKPP) equation BIBREF29, BIBREF30, which is known to exhibit traveling fronts, just as the well-known FKPP equation in the continuum BIBREF31, BIBREF32. In the present context, the focus will however be on stationary solutions on finite arrays, obeying
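The displayed equations above are omitted in this extraction. A hedged reconstruction, combining the logistic conversion term of strength $C_m$ with nearest-neighbour migration at rate $\gamma$ once $P_m=1$, would read

$$
\frac{{\rm d}X_m}{{\rm d}t}=C_m\,X_m(1-X_m)+\gamma\,(X_{m+1}-2X_m+X_{m-1}),
$$

with the Neumann boundary conditions $X_0=X_1$ and $X_{M+1}=X_M$; stationary profiles would then obey the same expression set to zero, and a uniform $C_m=C$ indeed turns this into a discrete FKPP equation.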
Breaking spatial symmetry: language coexistence by inhomogeneous attractivenesses ::: Two geographic areas
We begin with the case of two geographic areas connected by a single link. The problem is simple enough to allow for an explicit exposition of its full solution. The rate equations (DISPLAY_FORM55) become
Because of the migration fluxes, for any non-zero $\gamma $ it is impossible for any of the languages to become extinct in one area and survive in the other one. The only possibility is that of a uniform consensus, where one and the same language survives in all areas. The consensus state where language 1 survives is described by the stationary solution $X_1=X_2=1$. The corresponding stability matrix is
where $\mathop {{\rm diag}}(\dots )$ denotes a diagonal matrix (whose entries are listed), whereas ${\Delta }_2$ is defined in (DISPLAY_FORM135). The stability condition amounts to
Similarly, the consensus state where language 2 survives is described by the stationary solution $X_1=X_2=0$. The corresponding stability matrix is
The conditions for the latter to be stable read
Figure FIGREF66 shows the phase diagram of the model in the $C_1$–$C_2$ plane for $\gamma =1$. Region I1 is the consensus phase where language 1 survives. It is larger than the quadrant where this language is everywhere favoured (i.e., $C_1$ and $C_2$ are positive), as its boundary (red curve) reads $C_1C_2+\gamma (C_1+C_2)=0$. Similarly, region I2 is the consensus phase where language 2 survives. It is larger than the quadrant where this language is everywhere favoured (i.e., $C_1$ and $C_2$ are negative), as its boundary (blue curve) reads $C_1C_2-\gamma (C_1+C_2)=0$. The regions marked IIA and IIB are coexistence phases. These phases are located symmetrically around the line $C_1+C_2=0$ (black dashed line) where none of the languages is globally favoured. There, the fractions $X_1$ and $X_2$ of speakers of language 1 in both areas vary continuously between zero on the blue curve and unity on the red one, according to
with
We have therefore
all over the coexistence phases IIA and IIB. The right-hand side equals 0 on the blue curve, 1 on the black dashed line, and 2 on the red curve.
Breaking spatial symmetry: language coexistence by inhomogeneous attractivenesses ::: $M$ geographical areas
From now on we consider the general situation of $M$ geographic areas, as shown in Figure FIGREF51. The basic properties of the model can be inferred from the case of two areas, studied in section SECREF57. In full generality, because of migration fluxes, it is impossible for any of the languages to become extinct in some areas and survive in some other ones. The only possibility is that of a uniform consensus, where one and the same language survives in all areas.
The consensus state where language 1 survives is described by the uniform stationary solution where $X_m=1$ for all $m=1,\dots ,M$. The corresponding stability matrix is
Similarly, the consensus state where language 2 survives corresponds to the stationary solution where $X_m=0$ for all $m=1,\dots ,M$. The corresponding stability matrix is
These expressions respectively generalise (DISPLAY_FORM59) and (DISPLAY_FORM61).
If all the conversion rates $C_m$ vanish, both the above matrices read $-\gamma {\Delta }_M$, whose spectrum comprises one vanishing eigenvalue (see (DISPLAY_FORM136)). In the regime where all the conversion rates $C_m$ are small with respect to $\gamma $, perturbation theory tells us that the largest eigenvalues of ${S}_M^{(0)}$ and ${S}_M^{(1)}$ respectively read $\overline{C}$ and $-\overline{C}$, to leading order, where
We therefore predict that the average conversion rate $\overline{C}$ determines the fate of the system in the regime where conversion rates are small with respect to $\gamma $. If language 1 is globally favoured, i.e., $\overline{C}>0$, the system reaches the consensus where language 1 survives, and vice versa.
In the generic situation where the conversion rates $C_m$ are comparable to $\gamma $, their dispersion around their spatial average $\overline{C}$ broadens the spectra of the matrices ${S}_M^{(1)}$ and ${S}_M^{(0)}$. As a consequence, the condition $\overline{C}>0$ (resp. $\overline{C}<0$) is necessary, albeit not sufficient, for the consensus where language 1 (resp. language 2) survives to be stable.
In the following we shall successively consider ordered attractiveness profiles in Section SECREF71 and random ones in Section SECREF84.
Breaking spatial symmetry: language coexistence by inhomogeneous attractivenesses ::: Ordered attractiveness profiles
This section is devoted to a simple situation where the attractiveness profiles of both languages are ordered spatially. More specifically, we consider the case where language 1 is favoured in the $K$ first (i.e., leftmost) areas, whereas language 2 is favoured in the $L$ last (i.e., rightmost) areas, with $K\ge L$ and $K+L=M$. For the sake of simplicity, we choose to describe this situation by conversion rates that have unit magnitude, as shown in Figure FIGREF73:
The symmetric situation where $M$ is even and $K=L=M/2$, so that $\overline{C}=0$, can be viewed as a generalisation of the case of two geographic areas, studied in Section SECREF57, for $C_1+C_2=0$, i.e., along the black dashed line of Figure FIGREF66. Both languages play symmetric roles, so that no language is globally preferred, and no consensus can be reached. As a consequence, both languages survive everywhere, albeit with non-trivial spatial profiles, which can be thought of as avatars of the FKPP traveling fronts mentioned above, rendered stationary by being pinned by boundary conditions. The upper panel of Figure FIGREF76 shows the stationary fraction $X_m$ of speakers of language 1 against area number, for $M=20$ (i.e., $K=L=10$) and several $\gamma $. The abscissa $m-1/2$ is chosen in order to have a symmetric plot. As one might expect, each language is preferred in the areas where it is favoured, i.e., we have $X_m>1/2$ for $m=1,\dots ,K$, whereas $X_m<1/2$ for $m=K+1,\dots ,M$. Profiles get smoother as the migration rate $\gamma $ is increased. The width $\xi $ of the transition region is indeed expected to grow as
This scaling law is nothing but the large $\gamma $ behaviour of the exact dispersion relation
(see (DISPLAY_FORM148)) between $\gamma $ and the decay rate $\mu $ such that either $X_m$ or $1-X_m$ falls off as ${\rm e}^{\pm m\mu }$, with the natural identification $\xi =1/\mu $.
The asymmetric situation where $K>L$, so that $\overline{C}=(K-L)/M>0$, implying that language 1 is globally favoured, is entirely different. The system indeed reaches a consensus state where the favoured language survives, whenever the migration rate $\gamma $ exceeds some threshold $\gamma _c$. This threshold, corresponding to the consensus state becoming marginally stable, only depends on the integers $K$ and $L$. It is derived in Appendix SECREF6 and given by the largest solution of (DISPLAY_FORM153).
This is illustrated in the lower panel of Figure FIGREF76, showing $X_m$ against $m-1/2$ for $K=12$ and $L=8$, and the same values of $\gamma $ as on the upper panel. The corresponding threshold reads $\gamma _c=157.265$. The whole profile shifts upwards while it broadens as $\gamma $ is increased. It tends uniformly to unity as $\gamma $ tends to $\gamma _c$, demonstrating the continuous nature of the transition where consensus is formed.
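The threshold for such ordered profiles can be estimated numerically from linear stability. The sketch below assumes, as described above for two areas, that the stability matrix of the consensus state on language 1 is $-\mathop{\rm diag}(C_m)-\gamma\Delta_M$, with $\Delta_M$ the discrete Laplacian with Neumann boundary conditions, and locates the value of $\gamma$ at which its largest eigenvalue crosses zero; the bisection bounds are illustrative.

import numpy as np

def neumann_laplacian(M):
    """Discrete Laplacian Delta_M on a chain of M areas with Neumann boundaries."""
    D = 2.0 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)
    D[0, 0] = D[-1, -1] = 1.0
    return D

def consensus1_stable(C, gamma):
    """Assumed criterion: consensus on language 1 is stable when all eigenvalues
    of -diag(C_m) - gamma * Delta_M are negative."""
    S = -(np.diag(C) + gamma * neumann_laplacian(len(C)))
    return np.max(np.linalg.eigvalsh(S)) < 0.0

K, L = 12, 8                                  # ordered profile of Figure FIGREF76
C = np.array([1.0] * K + [-1.0] * L)

lo, hi = 0.0, 1.0e4                           # bisection on the migration rate
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if consensus1_stable(C, mid):
        hi = mid
    else:
        lo = mid
print(f"estimated threshold gamma_c ~ {hi:.3f}")   # the text quotes gamma_c = 157.265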
The threshold migration rate $\gamma _c$ assumes a scaling form in the regime where $K$ and $L$ are large and comparable. Setting
so that the excess fraction $f$ identifies with the average conversion rate $\overline{C}$, the threshold rate $\gamma _c$ grows quadratically with the system size $M$, according to
where $g(f)$ is the smallest positive solution of the implicit equation
which is a rescaled form of (DISPLAY_FORM153).
The quadratic growth law (DISPLAY_FORM78) is a consequence of the diffusive nature of migrations. The following limiting cases deserve special mention.
For $f\rightarrow 0$, i.e., $K$ and $L$ relatively close to each other ($K-L\ll M$), we have
yielding to leading order
For $f\rightarrow 1$, i.e., $L\ll K$, we have $g(f)\approx \pi /(4(1-f))$, up to exponentially small corrections, so that
The situation considered in the lower panel of Figure FIGREF76, i.e., $M=20$, $K=12$ and $L=8$, corresponds to $f=1/5$, hence $g=0.799622814\dots $, so that
This scaling result predicts $\gamma _c\approx 156.397$ for $M=20$, a good approximation to the exact value $\gamma _c=157.265$.
Breaking spatial symmetry: language coexistence by inhomogeneous attractivenesses ::: Random attractiveness profiles
We now consider the situation of randomly disordered attractiveness profiles. The conversion rates $C_m$ are modelled as independent random variables drawn from some symmetric distribution $f(C)$, such that $\left\langle C_m\right\rangle =0$ and $\left\langle C_m^2\right\rangle =w^2$.
The first quantity we will focus on is the consensus probability ${\cal P}$. It is clear from a dimensional analysis of the rate equations (DISPLAY_FORM56) that ${\cal P}$ depends on the ratio $\gamma /w$, the system size $M$, and the distribution $f(C)$. Furthermore, ${\cal P}$ is expected to increase with $\gamma /w$. It can be estimated as follows in the limiting situations where $\gamma /w$ is either very small or very large.
In the regime where $\gamma \ll w$ (e.g. far from the center in Figure FIGREF66), conversion effects dominate migration effects. There, a consensus where language 1 (resp. language 2) survives can only be reached if all conversion rates $C_m$ are positive (resp. negative). The total consensus probability thus scales as
Consensus is therefore highly improbable in this regime. In other words, coexistence of both languages is overwhelmingly the rule.
In the opposite regime where $\gamma \gg w$ (e.g. in the vicinity of the center in Figure FIGREF66), migration effects dominate conversion effects. There, we have seen in Section SECREF67 that the average conversion rate defined in (DISPLAY_FORM70) essentially determines the fate of the system. If language 1 is globally favoured, i.e., $\overline{C}>0$, then the system reaches the uniform consensus where language 1 survives, and vice versa. Coexistence is therefore rare in this regime, as it requires $\overline{C}$ to be atypically small. The probability ${\cal Q}$ for this to occur, to be identified with $1-{\cal P}$, has been given a precise definition in Appendix SECREF6 by means of the expansion (DISPLAY_FORM143) of $D_M=\det {S}_M^{(1)}$ as a power series in the $C_m$, and estimated within a simplified Gaussian setting. In spite of the heuristic character of its derivation, the resulting estimate (DISPLAY_FORM147) demonstrates that the consensus probability scales as
all over the regime where the ratio $\gamma /w$ and the system size $M$ are both large. Furthermore, taking (DISPLAY_FORM147) literally, we obtain the following heuristic prediction for the finite-size scaling function:
The scaling result (DISPLAY_FORM86) shows that the scale of the migration rate $\gamma $ which is relevant to describe the consensus probability for a typical disordered profile of attractivenesses reads
This estimate grows less rapidly with $M$ than the corresponding threshold for ordered profiles, which obeys a quadratic growth law (see (DISPLAY_FORM78)). The exponent $3/2$ of the scaling law (DISPLAY_FORM88) can be put in perspective with the anomalous scaling of the localisation length in one-dimensional Anderson localisation near band edges. There is indeed a formal analogy between the stability matrices of the present problem and the Hamiltonian of a tight-binding electron in a disordered potential, with the random conversion rates $C_m$ replacing the disordered on-site energies. For the tight-binding problem, the localisation length is known to diverge as $\xi \sim 1/w^2$ in the bulk of the spectrum, albeit only as $\xi \sim 1/w^{2/3}$ in the vicinity of band edges BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37. Replacing $\xi $ by the system size $M$ and remembering that $w$ stands for $w/\gamma $, we recover (DISPLAY_FORM88). The exponent $3/2$ is therefore nothing but the inverse of the exponent $2/3$ of anomalous band-edge localisation.
Figure FIGREF89 shows a finite-size scaling plot of the consensus probability ${\cal P}$ against $x=\gamma /M^{3/2}$. Data correspond to arrays of length $M=20$ with uniform and Gaussian distributions of conversion rates with $w=1$. Each data point is the outcome of $10^6$ independent realisations. The thin black curve is a guide to the eye, suggesting that the finite-size scaling function $\Phi $ is universal, i.e., independent of details of the conversion rate distribution. It has indeed been checked that the weak residual dependence of data points on the latter distribution becomes even smaller as $M$ is further increased. The full green curve shows the heuristic prediction (DISPLAY_FORM87), providing a semi-quantitative picture of the finite-size scaling function. For instance, consensus is reached with probability ${\cal P}=1/2$ and ${\cal P}=2/3$ respectively for $x\approx 0.18$ and $x\approx 0.33$, according to actual data, whereas (DISPLAY_FORM87) respectively predicts $x=1/\sqrt{12}=0.288675\dots $ and $x=1/2$.
Besides the value of the consensus probability ${\cal P}$, the next question is what determines whether or not the system reaches consensus. In Section SECREF67 it has been demonstrated that the average conversion rate $\overline{C}$ defined in (DISPLAY_FORM70) essentially determines the fate of the system in the regime where migration effects dominate conversion effects. It has also been shown that the consensus denoted by I1, where language 1 survives, can only be stable for $\overline{C}>0$, whereas the consensus denoted by I2, where language 2 survives, can only be stable for $\overline{C}<0$. The above statements are made quantitative in Figure FIGREF90, showing the probability distribution of the average conversion rate $\overline{C}$, for a Gaussian distribution of conversion rates with $w=1$. The total (i.e., unconditioned) distribution (black curves) is Gaussian. Red and blue curves show the distributions conditioned on consensus. They are indeed observed to live entirely on $\overline{C}>0$ for I1 and on $\overline{C}<0$ for I2. Finally, the distributions conditioned on coexistence (green curves, denoted by II) exhibit narrow symmetric shapes around the origin. Values of the migration rate $\gamma $ are chosen so as to have three partial histograms with equal weights, i.e., a consensus probability ${\cal P}=2/3$. This fixes $\gamma \approx 0.351$ for $M=2$ (top) and $\gamma \approx 10.22$ for $M=10$ (bottom).
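The same stability criterion suggests a simple Monte Carlo sketch for random profiles: draw the $C_m$, and count an instance as consensus when either uniform consensus state is linearly stable. Identifying the dynamical outcome with linear stability in this way is an assumption patterned on the two-area phase diagram; the small helpers are repeated so that the sketch is self-contained.

import numpy as np

def neumann_laplacian(M):
    D = 2.0 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)
    D[0, 0] = D[-1, -1] = 1.0
    return D

def consensus_probability(M, gamma, w=1.0, samples=5_000, seed=2):
    """Fraction of random Gaussian profiles of width w for which one of the two
    uniform consensus states is linearly stable under the assumed criterion."""
    rng = np.random.default_rng(seed)
    gD = gamma * neumann_laplacian(M)
    hits = 0
    for _ in range(samples):
        C = rng.normal(0.0, w, size=M)
        stable_1 = np.max(np.linalg.eigvalsh(-(np.diag(C) + gD))) < 0.0
        stable_2 = np.max(np.linalg.eigvalsh(-(np.diag(-C) + gD))) < 0.0
        if stable_1 or stable_2:
            hits += 1
    return hits / samples

M = 20
for gamma in (1.0, 10.0, 30.0, 100.0):
    print(f"M={M}, gamma={gamma}: consensus probability ~ {consensus_probability(M, gamma):.3f}")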
Discussion
An area of interest that is common to both physicists and linguists concerns the evolution of competing languages. It was long assumed that such competition would result in the dominance of one language above all its competitors, until some recent work hinted that coexistence might be possible under specific circumstances. We argue here that coexistence of two or more competing languages can result from two symmetry-breaking mechanisms – due respectively to imbalanced internal dynamics and spatial heterogeneity – and engage in a quantitative exploration of the circumstances which lead to this coexistence. In this work, both symmetry-breaking scenarios are dealt with on an equal footing.
In the first case of competing languages in a single geographical area, our introduction of an interpolation parameter $q$, which measures the amount of imbalance in the internal dynamics, turns out to be crucial for the investigation of language coexistence. It is conceptually somewhat subtle, since it appears only in the saturation terms in the coupled logistic equations used here to describe language competition; in contrast to the conversion terms (describing language shift from a less to a more favoured language), its appearance is symmetric with respect to both languages. For arbitrarily many competing languages, the ensuing rate equations for the fractions of speakers are seen to bear a strong resemblance to a broad range of models used in theoretical ecology, including Lotka-Volterra or predator-prey systems.
We first consider the case where the $N$ languages in competition in a single area have equally spaced attractivenesses. This simple situation allows for an exact characterisation of the stationary state. The range of attractivenesses is measured by the mean attractiveness $g$. As this parameter is increased, the number $K$ of surviving languages decreases progressively, as the least favoured languages successively become extinct at threshold values of $g$. Importantly, the range of values of $g$ between the start of the disappearances and the appearance of consensus grows proportionally to $N^2$. There is therefore a substantial amount of coexistence between languages that are significantly attractive.
In the general situation, where the attractivenesses of the competing languages are modelled as random variables with an arbitrary distribution, the outcomes of numerical studies at finite $N$ are corroborated by a detailed asymptotic analysis in the regime of large $N$. One of the key results is that the quantity $W=Nw$ (the product of the number of languages $N$ with the mean attractiveness $w$) determines many quantities of interest, including the mean fraction $R=\left\langle K\right\rangle /N$ of surviving languages. The relation between $W$ and $R$ is however non-universal, as it depends on the full attractiveness distribution. This non-universality is most prominent in the regime where the mean attractiveness is large, so that only the few most favoured languages survive in the stationary state. The number of such survivors is found to obey a scaling law, whose non-universal critical exponent is dictated by the specific form of the attractiveness distribution near its upper edge.
As far as symmetry breaking via spatial heterogeneity is concerned, we consider the paradigmatic case of two competing languages in a linear array of $M$ geographic areas, whose neighbours are linked via a uniform migration rate $\gamma $. In the simplest situation of two areas, we determine the full phase diagram of the model as a function of $\gamma $ as well as the conversion rates ruling language shift in each area. This allows us to associate different regions of phase space with either consensus or coexistence. Our analysis is then generalised to longer arrays of $M$ linked geographical regions. We first consider ordered attractiveness profiles, where language 1 is favoured in the $K$ leftmost areas, while language 2 is favoured in the $L$ rightmost ones. If the two blocks are of equal size so that no language is globally preferred, coexistence always results; however, the spatial profiles of the language speakers themselves are rather non-trivial. For blocks of unequal size, there is a transition from a situation of coexistence at low migration rates to a situation of uniform consensus at high migration rates, where the language favoured in the larger block is the only survivor in all areas. The critical migration rate at this transition grows as $M^2$. We next investigate disordered attractiveness profiles, where conversion rates are modelled as random variables. There, the probability of observing a uniform consensus is given by a universal scaling function of $x=\gamma /(M^{3/2}w)$, where $w$ is the width of the symmetric distribution of conversion rates.
The ratio between migration and conversion rates beyond which there is consensus – either with certainty or with a sizeable probability – grows with the number of geographic areas as $M^2$ for ordered profiles of attractivenesses, and as $M^{3/2}$ for disordered ones. The first exponent is a consequence of the diffusive nature of migrations, whereas the second one has been derived in Appendix SECREF134 and related to anomalous band-edge scaling in one-dimensional Anderson localisation. If geographical areas were arranged according to a more complex geometric structure, these exponents would respectively read $2d/d_s$ and $(4-d_s)/(2d_s)$, with $d$ and $d_s$ being the fractal and spectral dimensions of the underlying structure (see BIBREF38, BIBREF39, and BIBREF40, BIBREF41 for reviews).
Finally, we remark on another striking formal analogy – that between the rate equations (DISPLAY_FORM20) presented here, and those of a spatially extended model of competitive dynamics BIBREF42, itself inspired by a model of interacting black holes BIBREF43. In the latter, the non-trivial patterns of survivors on various networks and other geometrical structures were a particular focus of investigation, and led to the unearthing of universal behaviour. We believe that a network model of competing languages which combines both the symmetry-breaking scenarios discussed in this paper, so that every node corresponds to a geographical area with its own imbalanced internal dynamics, might lead to the discovery of similar universalities.
AM warmly thanks the Leverhulme Trust for the Visiting Professorship that funded this research, as well as the Faculty of Linguistics, Philology and Phonetics at the University of Oxford, for their hospitality.
Both authors contributed equally to the present work, were equally involved in the preparation of the manuscript, and have read and approved the final manuscript.
Asymptotic analysis for a large number of competing languages in a single area
This Appendix is devoted to an analytical investigation of the statistics of surviving languages in a single geographic area, in the regime where the number $N$ of competing languages is large.
The properties of the attractiveness distribution of the languages are key to determining whether coexistence or consensus will prevail. In particular the transition to consensus depends critically, and non-universally, on the way in which the attractiveness distribution decays, as will be shown below.
Statistical fluctuations between various instances of the model become negligible for large $N$, so that sharp (i.e., self-averaging) expressions can be obtained for many quantities of interest.
Let us begin with the simplest situation where all languages survive. When the number $N$ of competing languages is large, the condition for this to occur assumes a simple form. Consider the expression (DISPLAY_FORM36) for $x_N$. The law of large numbers ensures that the sum $S$ converges to
whereas $a_N$ is relatively negligible. The condition that all the $N$ competing languages survive therefore takes the form of a sharp inequality at large $N$, i.e.,
Throughout this regime, the expression for $x_N$ simplifies to
The above analysis can be extended to the general situation where the numbers $N$ of competing languages and $K$ of surviving ones are large and comparable, with the fraction of surviving languages,
taking any value in the range $0<R<1$.
The rescaled attractiveness of the least favoured surviving language, namely
turns out to play a key role in the subsequent analysis. Let us introduce for further reference the truncated moments ($k=0,1,2$)
First of all, the relationship between $R$ and $\eta $ becomes sharp in the large-$N$ regime. We have indeed
The limits of all quantities of interest can be similarly expressed in terms of $\eta $. We have for instance
for the sum introduced in (DISPLAY_FORM37). The marginal stability condition, namely that language number $K$ is on the verge of becoming extinct, translates to
The asymptotic dependence of the fraction $R$ of surviving languages on the rescaled mean attractiveness $W$ is therefore given in parametric form by (DISPLAY_FORM97) and (DISPLAY_FORM99). The identity
demonstrates that $R$ is a decreasing function of $W$, as it should be.
When the parameter $W$ reaches unity from above, the model exhibits a continuous transition from the situation where all languages survive. The parameter $\eta $ vanishes linearly as
with unit prefactor, irrespective of the attractiveness distribution. The fraction of surviving languages departs linearly from unity, according to
In the regime where $W\gg 1$, the fraction $R$ of surviving languages is expected to fall off to zero. As a consequence of (DISPLAY_FORM97), $R\ll 1$ corresponds to the parameter $\eta $ being close to the upper edge of the attractiveness distribution $f(\xi )$. This is to be expected, as the last surviving languages are the most attractive ones. As a consequence, the form of the relationship between $W$ and $R$ for $W\gg 1$ is highly non-universal, as it depends on the behavior of the distribution $f(\xi )$ near its upper edge. It turns out that the following two main classes of attractiveness distributions have to be considered.
Class 1: Power law at finite distance.
Consider the situation where the distribution $f(\xi )$ has a finite upper edge $\xi _0$, and either vanishes or diverges as a power law near this edge, i.e.,
The exponent $\alpha $ is positive. The density $f(\xi )$ diverges near its upper edge $\xi _0$ for $0<\alpha <1$, whereas it vanishes near $\xi _0$ for $\alpha >1$, and takes a constant value $f(\xi _0)=A$ for $\alpha =1$.
In the relevant regime where $\eta $ is close to $\xi _0$, the expressions (DISPLAY_FORM97) and (DISPLAY_FORM99) simplify to
Eliminating $\eta $ between the two estimates above, we obtain the following power-law relationship between $W$ and $R$:
In terms of the original quantities $K$ and $w$, the above result reads
Setting $K=1$ in this estimate, we predict that the consensus probability ${\cal P}$ becomes appreciable when
Class 2: Power law at infinity.
Consider now the situation where the distribution extends up to infinity, and falls off as a power law, i.e.,
The exponent $\beta $ is larger than 2, in order for the first two moments of $\xi $ to be convergent.
In the relevant regime where $\eta $ is large, the expressions (DISPLAY_FORM97) and (DISPLAY_FORM99) simplify to
Eliminating $\eta $ between the two estimates above, we obtain the following power-law relationship between $W$ and $R$:
In terms of the original quantities $K$ and $w$, the above result reads
Setting $K=1$ in this estimate, we predict that the consensus probability ${\cal P}$ becomes appreciable when
We now summarise the above discussion. In the regime where $W\gg 1$, the fraction $R$ of surviving languages falls off as a power law of the form
where the positive exponent $\lambda $ varies continuously, according to whether the distribution of attractivenesses extends up to a finite distance or infinity (see (DISPLAY_FORM106), (DISPLAY_FORM112)):
In the marginal situation between both classes mentioned above, comprising e.g. the exponential distribution, the decay exponent sticks to its borderline value
The decay law $R\sim 1/W$ might however be affected by logarithmic corrections.
Another view of the above scaling laws goes as follows. When the number of languages $N$ is large, the number of surviving languages decreases from $K=N$ to $K=1$ over a very broad range of mean attractivenesses. The condition for all languages to survive (see (DISPLAY_FORM92)) sets the beginning of this range as
The occurrence of a sizeable consensus probability ${\cal P}$ sets the end of this range as
where the exponent $\mu >-1/2$ varies continuously, according to (see (DISPLAY_FORM108), (DISPLAY_FORM114)):
In the marginal situation between both classes, the above exponent sticks to its borderline value
The extension of the dynamical range, defined as the ratio between both scales defined above, diverges as
We predict in particular a linear divergence for the exponential distribution ($\mu =0$) and a quadratic divergence for the uniform distribution ($\mu =1$). This explains the qualitative difference observed in Figure FIGREF50. The slowest growth of the dynamical range is the square-root law observed for distributions falling off as a power-law with $\beta \rightarrow 2$, so that $\mu =-1/2$.
To close, let us underline that most of the quantities encountered above assume simple forms for the uniform and exponential distributions (see (DISPLAY_FORM44)).
Uniform distribution.
The consensus probability (see (DISPLAY_FORM42)) reads
For large $N$, this becomes ${\cal P}\approx \exp (-N/(2w))$, namely a function of the ratio $w/N$, in agreement with (DISPLAY_FORM119) and (DISPLAY_FORM120), with exponent $\mu =1$, since $\alpha =1$.
The truncated moments read
We thus obtain
with exponent $\lambda =1/2$, in agreement with (DISPLAY_FORM106) and (DISPLAY_FORM116) for $\alpha =1$.
Exponential distribution.
The consensus probability reads
irrespective of $N$, in agreement with (DISPLAY_FORM119), with exponent $\mu =0$ (see (DISPLAY_FORM121)).
The truncated moments read
We thus obtain
with exponent $\lambda =1$, in agreement with (DISPLAY_FORM117).
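As a concrete check of the closed forms quoted above, the short sketch below evaluates the truncated moments symbolically. It assumes the standard definition $F_k(\eta )=\int _\eta \xi ^k f(\xi )\,{\rm d}\xi $ (which may differ in normalisation from the display equations) and unit-mean parametrisations of the exponential and uniform distributions, chosen purely for illustration.

```python
import sympy as sp

xi, eta = sp.symbols('xi eta', positive=True)

def truncated_moments(f, upper):
    # Assumed definition: F_k(eta) = integral from eta to the upper edge of xi**k * f(xi).
    return [sp.simplify(sp.integrate(xi**k * f, (xi, eta, upper))) for k in range(3)]

# Exponential attractiveness distribution with unit mean (illustrative choice);
# gives exp(-eta), (eta + 1)*exp(-eta), (eta**2 + 2*eta + 2)*exp(-eta).
print(truncated_moments(sp.exp(-xi), sp.oo))

# Uniform distribution on (0, 2), also with unit mean (illustrative choice);
# gives 1 - eta/2, 1 - eta**2/4, 4/3 - eta**3/6.
print(truncated_moments(sp.Rational(1, 2), 2))
```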
Stability matrices and their spectra ::: Generalities
This Appendix is devoted to stability matrices and their spectra. Let us begin by reviewing some general background (see e.g. BIBREF44 for a comprehensive overview). Consider an autonomous dynamical system defined by a vector field ${E}({x})$ in $N$ dimensions, i.e., by $N$ coupled first-order equations of the form
with $m,n=1,\dots ,N$, where the right-hand sides depend on the dynamical variables $\lbrace x_n(t)\rbrace $ themselves, but not explicitly on time.
Assume the above dynamical system has a fixed point $\lbrace x_m\rbrace $, such that $E_m\lbrace x_n\rbrace =0$ for all $m$. Small deviations $\lbrace \delta x_m(t)\rbrace $ around the fixed point $\lbrace x_m\rbrace $ obey the linearised dynamics given by the stability matrix ${S}$, i.e., the $N\times N$ matrix defined by
where right-hand sides are evaluated at the fixed point. The fixed point is stable, in the strong sense that small deviations fall off exponentially fast to zero, if all eigenvalues $\lambda _a$ of ${S}$ have negative real parts. In this case, if all the $\lambda _a$ are real, their opposites $\omega _a=-\lambda _a>0$ are the inverse relaxation times of the linearised dynamics. In particular, the opposite of the smallest eigenvalue, simply denoted by $\omega $, characterises exponential convergence to the fixed point for a generic initial state. If some of the $\lambda _a$ have non-zero imaginary parts, convergence is oscillatory.
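As a minimal numerical illustration of this recipe (using a generic two-species competitive Lotka-Volterra system with made-up coefficients, not the rate equations of the present model), one can build the stability matrix by finite differences and read off the relaxation rate $\omega $ from its spectrum:

```python
import numpy as np

# Illustrative two-species competitive Lotka-Volterra system (made-up
# coefficients, not the language-competition rate equations of this paper).
a = np.array([[1.0, 0.5],
              [0.4, 1.0]])      # competition matrix
r = np.array([1.0, 1.0])        # growth rates

def E(x):
    return x * (r - a @ x)

x_star = np.linalg.solve(a, r)  # interior fixed point: a @ x* = r

def stability_matrix(E, x, h=1e-7):
    # S_{mn} = dE_m/dx_n at the fixed point, estimated by central differences.
    n = len(x)
    S = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n); dx[j] = h
        S[:, j] = (E(x + dx) - E(x - dx)) / (2 * h)
    return S

S = stability_matrix(E, x_star)
eigvals = np.linalg.eigvals(S)
print("fixed point:", x_star)
print("eigenvalues of S:", eigvals)

# Stable if all real parts are negative; the inverse relaxation time is minus
# the real part of the eigenvalue closest to zero.
if np.all(eigvals.real < 0):
    print("stable; omega =", -np.max(eigvals.real))
```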
The analysis of fixed points and bifurcations in low-dimensional Lotka-Volterra and replicator equations has been the subject of extensive investigations BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28 (see also BIBREF19, BIBREF20, BIBREF21).
Stability matrices and their spectra ::: Array models
The remainder of this Appendix is devoted to the stability matrices involved in the array models considered in Section SECREF3, for an arbitrarily large number $M$ of geographical areas. All those stability matrices are related to the symmetric $M\times M$ matrix
representing (minus) the Laplacian operator on a linear array of $M$ sites, with Neumann boundary conditions. References BIBREF45, BIBREF46 provide reviews on the Laplacian and related operators on graphs.
The eigenvalues $\lambda _a$ of ${\Delta }_M$ and the corresponding normalised eigenvectors ${\phi }_a$, such that ${\Delta }_M{\phi }_a=\lambda _a{\phi }_a$ and ${\phi }_a\cdot {\phi }_b=\delta _{ab}$, read
($a=0,\dots ,M-1$). The vanishing eigenvalue $\lambda _0=0$ corresponds to the uniform eigenvector $\phi _{0,m}=1/\sqrt{M}$.
Let us begin by briefly considering the simple example of the stability matrix
of the rate equations (DISPLAY_FORM54) for the total populations $P_m(t)$. Its eigenvalues are $-1-\gamma \lambda _a$. The smallest of them in absolute value is $-1$, so that the inverse relaxation time is given by $\omega =1$, as announced below (DISPLAY_FORM54).
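The sketch below constructs a standard nearest-neighbour version of this matrix (assumed to coincide with ${\Delta }_M$ up to conventions) and checks numerically that its lowest eigenvalue vanishes with a uniform eigenvector, and that the resulting population stability eigenvalues $-1-\gamma \lambda _a$ all lie at or below $-1$:

```python
import numpy as np

def neumann_laplacian(M):
    # Standard nearest-neighbour (minus-)Laplacian on a linear array of M sites
    # with Neumann (reflecting) boundaries; assumed to match Delta_M up to conventions.
    D = 2.0 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)
    D[0, 0] = D[-1, -1] = 1.0   # boundary sites have a single neighbour
    return D

M, gamma = 6, 0.3
lam, phi = np.linalg.eigh(neumann_laplacian(M))

print("eigenvalues lambda_a:", np.round(lam, 6))        # lowest one is 0
print("phi_0 (uniform up to sign):", np.round(phi[:, 0], 6))
print("population stability eigenvalues -1 - gamma*lambda_a:",
      np.round(-1.0 - gamma * lam, 6))                  # all <= -1, slowest rate omega = 1
```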
Let us now consider the stability matrices
respectively defined in (DISPLAY_FORM68) and (DISPLAY_FORM69), and corresponding to both uniform consensus states for an arbitrary profile of conversion rates $C_m$. The ensuing stability conditions have been written down explicitly in (DISPLAY_FORM60) and (DISPLAY_FORM62) for $M=2$. It will soon become clear that it is virtually impossible to write them down for an arbitrary size $M$. Some information can however be gained from the calculation of the determinants of the above matrices. They only differ by a global sign change of all the conversion rates $C_m$, so that it is sufficient to consider ${S}_M^{(1)}$. It is a simple matter to realise that its determinant reads
where $u_m$ is a generalised eigenvector solving the following Cauchy problem:
with initial conditions $u_0=u_1=1$. We thus obtain recursively
and so on. The expression (DISPLAY_FORM141) for $D_2$ agrees with the second of the conditions (DISPLAY_FORM60) and with the equation of the red curve in Figure FIGREF66, as it should. The corresponding expression for $D_3$ demonstrates that the complexity of the stability conditions grows rapidly with the system size $M$.
Stability matrices and their spectra ::: Array models ::: Random arrays
In the case of random arrays, considered in Section SECREF84, the conversion rates $C_m$ are independent random variables such that $\left\langle C_m\right\rangle =0$ and $\left\langle C_m^2\right\rangle =w^2$.
The regime of most interest is where the conversion rates $C_n$ are small with respect to $\gamma $. In this regime, the determinant $D_M$ can be expanded as a power series in the conversion rates. The $u_m$ solving the Cauchy problem (DISPLAY_FORM140) are close to unity. Setting
where the $u_m^{(1)}$ are linear and the $u_m^{(2)}$ quadratic in the $C_n$, we obtain after some algebra
where
are respectively linear and quadratic in the $C_n$. We have
In Section SECREF84 we need an estimate of the probability ${\cal Q}$ that $\overline{C}=X/M$ is atypically small. Within the present setting, it is natural to define the latter event as $\left|X\right|<\left|Y\right|$. The corresponding probability can be worked out provided we make the ad hoc simplifying assumptions – which certainly do not hold exactly in the real world – that $X$ and $Y$ are Gaussian and independent. Within this framework, the complex Gaussian random variable
has an isotropic density in the complex plane. We thus obtain
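Leaving the resulting closed-form expression aside, the construction can be checked numerically: under the stated assumption of independent centred Gaussian $X$ and $Y$, the probability of the event $\left|X\right|<\left|Y\right|$ follows the standard arctangent law, and a direct sampling estimate agrees with it. The standard deviations below are illustrative placeholders, not values derived from the conversion-rate statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder standard deviations; in the model they would be fixed by the
# second moments of the linear (X) and quadratic (Y) combinations of the C_n.
sigma_x, sigma_y = 1.0, 0.4

X = rng.normal(0.0, sigma_x, size=1_000_000)
Y = rng.normal(0.0, sigma_y, size=1_000_000)

Q_mc = np.mean(np.abs(X) < np.abs(Y))
# Standard result for independent centred Gaussians: P(|X|<|Y|) = (2/pi) arctan(sigma_y/sigma_x).
Q_exact = (2.0 / np.pi) * np.arctan(sigma_y / sigma_x)

print(f"Monte Carlo estimate: {Q_mc:.4f}")
print(f"Arctangent law      : {Q_exact:.4f}")
```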
Stability matrices and their spectra ::: Array models ::: Ordered arrays
The aim of this last section is to investigate the spectrum of the stability matrix ${S}_M^{(1)}$ associated with the ordered profile of conversion rates given by (DISPLAY_FORM72).
In this case, the generalised eigenvector $u_m$ solving the Cauchy problem (DISPLAY_FORM140) can be worked out explicitly. We have $C_m=1$ for $m=1,\dots ,K$, and therefore $u_m=a{\rm e}^{m\mu }+b{\rm e}^{-m\mu }$, where $\mu >0$ obeys the dispersion relation
The initial conditions $u_0=u_1=1$ fix $a$ and $b$, and so
Similarly, we have $C_m=-1$ for $m=K+\ell $, with $\ell =1,\dots ,L$, and therefore $u_m=\alpha {\rm e}^{{\rm i}\ell q}+\beta {\rm e}^{-{\rm i}\ell q}$, where $0<q<\pi $ obeys the dispersion relation
Matching both solutions for $m=K$ and $K+1$ fixes $\alpha $ and $\beta $, and so
Inserting the latter result into (DISPLAY_FORM139), we obtain the following expression for the determinant of ${S}_M^{(1)}$, with $M=K+L$:
The vanishing of the above expression, i.e.,
signals that one eigenvalue of the stability matrix ${S}^{(1)}$ vanishes. In particular, the consensus state where language 1 survives becomes marginally stable at the threshold migration rate $\gamma _c$, where the largest eigenvalue of ${S}^{(1)}$ vanishes. Equation (DISPLAY_FORM153) amounts to a polynomial equation of the form $P_{K,L}(\gamma )=0$, where the polynomial $P_{K,L}$ has degree $K+L-1=M-1$. All its zeros are real, and $\gamma _c$ is the largest of them. The first of these polynomials read | Unanswerable |
1fdcc650c65c11908f6bde67d5052087245f3dde | 1fdcc650c65c11908f6bde67d5052087245f3dde_0 | Q: Do they report results only on English data?
Text: Introduction
Ultrasound tongue imaging (UTI) uses standard medical ultrasound to visualize the tongue surface during speech production. It provides a non-invasive, clinically safe, and increasingly inexpensive method to visualize the vocal tract. Articulatory visual biofeedback of the speech production process, using UTI, can be valuable for speech therapy BIBREF0 , BIBREF1 , BIBREF2 or language learning BIBREF3 , BIBREF4 . Ultrasound visual biofeedback combines auditory information with visual information of the tongue position, allowing users, for example, to correct inaccurate articulations in real-time during therapy or learning. In the context of speech therapy, automatic processing of ultrasound images was used for tongue contour extraction BIBREF5 and the animation of a tongue model BIBREF6 . More broadly, speech recognition and synthesis from articulatory signals BIBREF7 captured using UTI can be used with silent speech interfaces in order to help restore spoken communication for users with speech or motor impairments, or to allow silent spoken communication in situations where audible speech is undesirable BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Similarly, ultrasound images of the tongue have been used for direct estimation of acoustic parameters for speech synthesis BIBREF13 , BIBREF14 , BIBREF15 .
Speech and language therapists (SLTs) have found UTI to be very useful in speech therapy. In this work we explore the automatic processing of ultrasound tongue images in order to assist SLTs, who currently largely rely on manual processing when using articulatory imaging in speech therapy. One task that could assist SLTs is the automatic classification of tongue shapes from raw ultrasound. This can facilitate the diagnosis and treatment of speech sound disorders, by allowing SLTs to automatically identify incorrect articulations, or by quantifying patient progress in therapy. In addition to being directly useful for speech therapy, the classification of tongue shapes enables further understanding of phonetic variability in ultrasound tongue images. Much of the previous work in this area has focused on speaker-dependent models. In this work we investigate how automatic processing of ultrasound tongue imaging is affected by speaker variation, and how severe degradations in performance can be avoided when applying systems to data from previously unseen speakers through the use of speaker adaptation and speaker normalization approaches.
Below, we present the main challenges associated with the automatic processing of ultrasound data, together with a review of speaker-independent models applied to UTI. Following this, we present the experiments that we have performed (Section SECREF2 ), and discuss the results obtained (Section SECREF3 ). Finally we propose some future work and conclude the paper (Sections SECREF4 and SECREF5 ).
Ultrasound Tongue Imaging
There are several challenges associated with the automatic processing of ultrasound tongue images.
Image quality and limitations. UTI output tends to be noisy, with unrelated high-contrast edges, speckle noise, or interruptions of the tongue surface BIBREF16 , BIBREF17 . Additionally, the oral cavity is not entirely visible from the image, missing the lips, the palate, or the pharyngeal wall.
Inter-speaker variation. Age and physiology may affect the output, with children imaging better than adults due to more moisture in the mouth and less tissue fat BIBREF16 . However, dry mouths lead to poor imaging, which might occur in speech therapy if a child is nervous during a session. Similarly, the vocal tracts of children across different ages may be more variable than those of adults.
Probe placement. Articulators that are orthogonal to the ultrasound beam direction image well, while those at an angle tend to image poorly. Incorrect or variable probe placement during recordings may lead to high variability between otherwise similar tongue shapes. This may be controlled using helmets BIBREF18 , although it is unreasonable to expect the speaker to remain still throughout the recording session, especially if working with children. Therefore, probe displacement should be expected to be a factor in image quality and consistency.
Limited data. Although ultrasound imaging is becoming less expensive to acquire, there is still a lack of large publicly available databases to evaluate automatic processing methods. The UltraSuite Repository BIBREF19 , which we use in this work, helps alleviate this issue, but it still does not compare to standard speech recognition or image classification databases, which contain hundreds of hours of speech or millions of images.
Related Work
Earlier work concerned with speech recognition from ultrasound data has mostly been focused on speaker-dependent systems BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 . An exception is the work of Xu et al. BIBREF24 , which investigates the classification of tongue gestures from ultrasound data using convolutional neural networks. Some results are presented for a speaker-independent system, although the investigation is limited to two speakers generalizing to a third. Fabre et al BIBREF5 present a method for automatic tongue contour extraction from ultrasound data. The system is evaluated in a speaker-independent way by training on data from eight speakers and evaluating on a single held out speaker. In both of these studies, a large drop in accuracy was observed when using speaker-independent systems in comparison to speaker-dependent systems. Our investigation differs from previous work in that we focus on child speech while using a larger number of speakers (58 children). Additionally, we use cross-validation to evaluate the performance of speaker-independent systems across all speakers, rather than using a small held out subset.
Ultrasound Data
We use the Ultrax Typically Developing dataset (UXTD) from the publicly available UltraSuite repository BIBREF19 . This dataset contains synchronized acoustic and ultrasound data from 58 typically developing children, aged 5-12 years old (31 female, 27 male). The data was aligned at the phone-level, according to the methods described in BIBREF19 , BIBREF25 . For this work, we discarded the acoustic data and focused only on the B-Mode ultrasound images capturing a midsagittal view of the tongue. The data was recorded using an Ultrasonix SonixRP machine using Articulate Assistant Advanced (AAA) software at approximately 121 fps with a 135° field of view. A single ultrasound frame consists of 412 echo returns from each of the 63 scan lines (63x412 raw frames). For this work, we only use UXTD type A (semantically unrelated words, such as pack, tap, peak, tea, oak, toe) and type B (non-words designed to elicit the articulation of target phones, such as apa, eepee, opo) utterances.
Data Selection
For this investigation, we define a simplified phonetic segment classification task. We determine four classes corresponding to distinct places of articulation. The first consists of bilabial and labiodental phones (e.g. /p, b, v, f, .../). The second class includes dental, alveolar, and postalveolar phones (e.g. /th, d, t, z, s, sh, .../). The third class consists of velar phones (e.g. /k, g, .../). Finally, the fourth class consists of alveolar approximant /r/. Figure FIGREF1 shows examples of the four classes for two speakers.
For each speaker, we divide all available utterances into disjoint train, development, and test sets. Using the force-aligned phone boundaries, we extract the mid-phone frame for each example across the four classes, which leads to a data imbalance. Therefore, for all utterances in the training set, we randomly sample additional examples within a window of 5 frames around the center phone, until we reach at least 50 training examples per class per speaker. It is not always possible to reach the target of 50 examples, however, if no more data is available to sample from. This process gives a total of approximately 10,700 training examples with roughly 2000 to 3000 examples per class, with each speaker having an average of 185 examples. Because the amount of data varies per speaker, we compute a sampling score, which denotes the proportion of sampled examples to the speaker's total training examples. We expect speakers with high sampling scores (less unique data overall) to underperform when compared with speakers with more varied training examples.
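A sketch of this selection procedure is given below (not the authors' code; the phone-to-class mapping and data structures are assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET_PER_CLASS = 50
WINDOW = 5  # frames sampled around the centre phone

def select_training_frames(phones, n_classes=4):
    """phones: list of (class_id, start_frame, end_frame) tuples from the
    forced alignment of one speaker's training utterances."""
    mids_per_class = {c: [] for c in range(n_classes)}
    for c, start, end in phones:
        mids_per_class[c].append((start + end) // 2)

    frames, labels, n_sampled, n_total = [], [], 0, 0
    for c, mids in mids_per_class.items():
        chosen = list(mids)
        # Oversample within +/- WINDOW frames of each centre frame until the
        # per-class target is met, or no more candidates are available.
        candidates = [m + d for m in mids for d in range(-WINDOW, WINDOW + 1) if d != 0]
        rng.shuffle(candidates)
        while len(chosen) < TARGET_PER_CLASS and candidates:
            chosen.append(candidates.pop())
        n_sampled += len(chosen) - len(mids)
        n_total += len(chosen)
        frames.extend(chosen)
        labels.extend([c] * len(chosen))

    # Sampling score: proportion of (over)sampled examples among all training examples.
    score = n_sampled / n_total if n_total else 0.0
    return np.array(frames), np.array(labels), score

demo = [(0, 10, 18), (1, 30, 36), (2, 50, 61), (3, 80, 88)]
frames, labels, score = select_training_frames(demo)
print(len(frames), round(score, 2))
```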
Preprocessing and Model Architectures
For each system, we normalize the training data to zero mean and unit variance. Due to the high dimensionality of the data (63x412 samples per frame), we have opted to investigate two preprocessing techniques: principal components analysis (PCA, often called eigentongues in this context) and a 2-dimensional discrete cosine transform (DCT). In this paper, Raw input denotes the mean-variance normalized raw ultrasound frame. PCA applies principal components analysis to the normalized training data and preserves the top 1000 components. DCT applies the 2D DCT to the normalized raw ultrasound frame and the upper left 40x40 submatrix (1600 coefficients) is flattened and used as input.
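The following sketch illustrates these three input representations (mean-variance normalisation, PCA/"eigentongue" features and the truncated 2-D DCT); the array shapes match the 63x412 frames described above, while the random data and the reduced component count in the demo are placeholders:

```python
import numpy as np
from scipy.fft import dctn
from sklearn.decomposition import PCA

def normalize(frames, mean, std):
    # Mean-variance normalisation with statistics estimated on the training data.
    return (frames - mean) / std

def pca_features(train_frames, frames, n_components):
    # "Eigentongue" features: PCA fitted on flattened training frames.
    pca = PCA(n_components=n_components).fit(train_frames.reshape(len(train_frames), -1))
    return pca.transform(frames.reshape(len(frames), -1))

def dct_features(frame, keep=40):
    # 2-D DCT of a single 63x412 frame; the upper-left keep x keep block
    # (1600 coefficients for keep=40) is flattened and used as input.
    return dctn(frame, norm='ortho')[:keep, :keep].ravel()

# Illustrative shapes only: 200 random "frames" of 63 scan lines x 412 samples.
raw = np.random.randn(200, 63, 412)
mu, sigma = raw.mean(axis=0), raw.std(axis=0) + 1e-8
norm = normalize(raw, mu, sigma)
print(pca_features(norm[:150], norm, n_components=100).shape)  # (200, 100); the paper keeps 1000
print(dct_features(norm[0]).shape)                             # (1600,)
```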
The first type of classifier we evaluate in this work is a feedforward neural network (DNN) consisting of 3 hidden layers, each with 512 rectified linear units (ReLUs), followed by a softmax output layer. The networks are optimized for 40 epochs with a mini-batch of 32 samples using stochastic gradient descent. Based on preliminary experiments on the validation set, hyperparameters such as the learning rate, decay rate, and L2 weight vary depending on the input format (Raw, PCA, or DCT). Generally, Raw inputs work better with smaller learning rates and heavier regularization to prevent overfitting to the high-dimensional data. As a second classifier to evaluate, we use convolutional neural networks (CNNs) with 2 convolutional and max pooling layers, followed by 2 fully-connected ReLU layers with 512 nodes. The convolutional layers use 16 filters, 8x8 and 4x4 kernels respectively, and rectified linear units. The fully-connected layers use dropout with a drop probability of 0.2. Because CNN systems take longer to converge, they are optimized over 200 epochs. For all systems, at the end of every epoch, the model is evaluated on the development set, and the best model across all epochs is kept.
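A sketch of the two architectures in Keras is shown below. The layer sizes follow the description above, but the pooling sizes, learning rate and regularisation settings are placeholders (the paper tunes these per input format), and the framework itself is an assumption rather than the authors' implementation:

```python
from tensorflow.keras import layers, models, optimizers, callbacks

def build_dnn(input_dim, n_classes=4):
    # 3 hidden ReLU layers of 512 units and a softmax output layer.
    model = models.Sequential([
        layers.Input(shape=(input_dim,)),
        layers.Dense(512, activation='relu'),
        layers.Dense(512, activation='relu'),
        layers.Dense(512, activation='relu'),
        layers.Dense(n_classes, activation='softmax'),
    ])
    model.compile(optimizer=optimizers.SGD(learning_rate=0.01),  # placeholder settings
                  loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return model

def build_cnn(input_shape=(63, 412, 1), n_classes=4):
    # 2 conv + max-pool blocks (16 filters, 8x8 then 4x4 kernels), then two
    # fully connected ReLU layers of 512 units with dropout 0.2.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, (8, 8), activation='relu'),
        layers.MaxPooling2D((2, 2)),                  # pool size not given in the text
        layers.Conv2D(16, (4, 4), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(512, activation='relu'),
        layers.Dropout(0.2),
        layers.Dense(512, activation='relu'),
        layers.Dropout(0.2),
        layers.Dense(n_classes, activation='softmax'),
    ])
    model.compile(optimizer=optimizers.SGD(learning_rate=0.01),
                  loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return model

# Training sketch: batch size 32, 40 epochs (DNN) or 200 epochs (CNN),
# keeping the model that performs best on the development set:
# model.fit(x_train, y_train, batch_size=32, epochs=40,
#           validation_data=(x_dev, y_dev),
#           callbacks=[callbacks.ModelCheckpoint('best.keras', save_best_only=True)])
```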
Training Scenarios and Speaker Means
We train speaker-dependent systems separately for each speaker, using all of their training data (an average of 185 examples per speaker). These systems use less data overall than the remaining systems, although we still expect them to perform well, as the data matches in terms of speaker characteristics. Realistically, such systems would not be viable, as it would be unreasonable to collect large amounts of data for every child who is undergoing speech therapy. We further evaluate all trained systems in a multi-speaker scenario. In this configuration, the speaker sets for training, development, and testing are equal. That is, we evaluate on speakers that we have seen at training time, although on different utterances. A more realistic configuration is a speaker-independent scenario, which assumes that the speaker set available for training and development is disjoint from the speaker set used at test time. This scenario is implemented by leave-one-out cross-validation. Finally, we investigate a speaker adaptation scenario, where training data for the target speaker becomes available. This scenario is realistic, for example, if after a session, the therapist were to annotate a small number of training examples. In this work, we use the held-out training data to finetune a pretrained speaker-independent system for an additional 6 epochs in the DNN systems and 20 epochs for the CNN systems. We use all available training data across all training scenarios, and we investigate the effect of the number of samples on one of the top performing systems.
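The cross-validation and adaptation protocol can be sketched as follows (a simplified outline assuming Keras-style models and a per-speaker data dictionary; it mirrors the description above rather than the authors' code):

```python
import numpy as np

def speaker_independent_cv(data, build_model, adapt_epochs=6):
    """data: dict mapping speaker -> {'train': (x, y), 'dev': (x, y), 'test': (x, y)};
    build_model: zero-argument factory returning a freshly compiled model."""
    results = {}
    speakers = list(data)
    for held_out in speakers:
        others = [s for s in speakers if s != held_out]
        train_x = np.concatenate([data[s]['train'][0] for s in others])
        train_y = np.concatenate([data[s]['train'][1] for s in others])
        dev_x = np.concatenate([data[s]['dev'][0] for s in others])
        dev_y = np.concatenate([data[s]['dev'][1] for s in others])

        # Speaker-independent system: the held-out speaker is unseen in training.
        model = build_model()
        model.fit(train_x, train_y, batch_size=32, epochs=40,
                  validation_data=(dev_x, dev_y), verbose=0)
        independent_acc = model.evaluate(*data[held_out]['test'], verbose=0)[1]

        # Speaker adaptation: fine-tune on the held-out speaker's training split
        # (6 additional epochs for the DNNs, 20 for the CNNs).
        model.fit(*data[held_out]['train'], batch_size=32, epochs=adapt_epochs, verbose=0)
        adapted_acc = model.evaluate(*data[held_out]['test'], verbose=0)[1]

        results[held_out] = {'independent': independent_acc, 'adapted': adapted_acc}
    return results
```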
This work is primarily concerned with generalizing to unseen speakers. Therefore, we investigate a method to provide models with speaker-specific inputs. A simple approach is to use the speaker mean, which is the pixel-wise mean of all raw frames associated with a given speaker, illustrated in Figure FIGREF8 . The mean frame might capture an overall area of tongue activity, average out noise, and compensate for probe placement differences across speakers. Speaker means are computed after mean variance normalization. For PCA-based systems, matrix decomposition is applied on the matrix of speaker means for the training data with 50 components being kept, while the 2D DCT is applied normally to each mean frame. In the DNN systems, the speaker mean is appended to the input vector. In the CNN system, the raw speaker mean is given to the network as a second channel. All model configurations are similar to those described earlier, except for the DNN using Raw input. Earlier experiments have shown that a larger number of parameters are needed for good generalization with a large number of inputs, so we use layers of 1024 nodes rather than 512.
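A sketch of the speaker-mean inputs follows (raw means only; the PCA and DCT transforms of the mean frames are omitted here for brevity):

```python
import numpy as np

def speaker_means(frames, speaker_ids):
    # Pixel-wise mean of all normalised frames belonging to each speaker.
    speaker_ids = np.asarray(speaker_ids)
    return {s: frames[speaker_ids == s].mean(axis=0) for s in np.unique(speaker_ids)}

def cnn_inputs_with_mean(frames, speaker_ids, means):
    # CNN input: each frame stacked with its speaker mean as a second channel.
    mean_stack = np.stack([means[s] for s in speaker_ids])
    return np.stack([frames, mean_stack], axis=-1)        # shape (N, 63, 412, 2)

def dnn_inputs_with_mean(features, speaker_ids, means):
    # DNN input: the (flattened) speaker mean appended to each feature vector.
    mean_stack = np.stack([means[s].ravel() for s in speaker_ids])
    return np.concatenate([features, mean_stack], axis=1)
```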
Results and Discussion
Results for all systems are presented in Table TABREF10 . When comparing preprocessing methods, we observe that PCA underperforms when compared with the 2-dimensional DCT or with the raw input. DCT-based systems achieve good results when compared with similar model architectures, especially when using smaller amounts of data as in the speaker-dependent scenario. When compared with raw input DNNs, the DCT-based systems likely benefit from the reduced dimensionality. In this case, lower dimensional inputs allow the model to generalize better and the truncation of the DCT matrix helps remove noise from the images. Compared with PCA-based systems, it is hypothesized that the observed improvements are likely due to the DCT's ability to encode the 2-D structure of the image, which is ignored by PCA. However, the DNN-DCT system does not outperform a CNN with raw input, ranking last across adapted systems.
When comparing training scenarios, as expected, speaker-independent systems underperform, which illustrates the difficulty involved in the generalization to unseen speakers. Multi-speaker systems outperform the corresponding speaker-dependent systems, which shows the usefulness of learning from a larger database, even if variable across speakers. Adapted systems improve over the dependent systems, except when using DCT. It is unclear why DCT-based systems underperform when adapting pre-trained models. Figure FIGREF11 shows the effect of the size of the adaptation data when finetuning a pre-trained speaker-independent system. As expected, the more data is available, the better that system performs. It is observed that, for the CNN system, with roughly 50 samples, the model outperforms a similar speaker-dependent system with roughly three times more examples.
Speaker means improve results across all scenarios. This is particularly useful for speaker-independent systems. The ability to generalize to unseen speakers is clear in the CNN system. Using the mean as a second channel in the convolutional network has the advantage of relating each pixel to its corresponding speaker mean value, allowing the model to better generalize to unseen speakers.
Figure FIGREF12 shows pair-wise scatterplots for the CNN system. Training scenarios are compared in terms of the effect on individual speakers. It is observed, for example, that the performance of a speaker-adapted system is similar to a multi-speaker system, with most speakers clustered around the identity line (bottom left subplot). Figure FIGREF12 also illustrates the variability across speakers for each of the training scenarios. The classification task is easier for some speakers than others. In an attempt to understand this variability, we can look at correlation between accuracy scores and various speaker details. For the CNN systems, we have found some correlation (Pearson's product-moment correlation) between accuracy and age for the dependent ( INLINEFORM0 ), multi-speaker ( INLINEFORM1 ), and adapted ( INLINEFORM2 ) systems. A very small correlation ( INLINEFORM3 ) was found for the independent system. Similarly, some correlation was found between accuracy and sampling score ( INLINEFORM4 ) for the dependent system, but not for the remaining scenarios. No correlation was found between accuracy and gender (point biserial correlation).
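These correlation analyses can be reproduced with standard tools; the sketch below uses made-up per-speaker values purely to show the calls (the actual accuracies and speaker metadata come from the experiments above):

```python
import numpy as np
from scipy.stats import pearsonr, pointbiserialr

# Illustrative per-speaker values (placeholders, not the experimental results).
accuracy = np.array([0.71, 0.64, 0.80, 0.58, 0.75, 0.69])
age      = np.array([  11,    6,   12,    5,   10,    8])
sampling = np.array([0.10, 0.45, 0.05, 0.60, 0.15, 0.30])
gender   = np.array([   0,    1,    0,    1,    1,    0])   # 0 = male, 1 = female

r_age, p_age = pearsonr(accuracy, age)
r_s, p_s = pearsonr(accuracy, sampling)
r_g, p_g = pointbiserialr(gender, accuracy)
print(f"accuracy vs age:            r = {r_age:.2f} (p = {p_age:.3f})")
print(f"accuracy vs sampling score: r = {r_s:.2f} (p = {p_s:.3f})")
print(f"accuracy vs gender:         r = {r_g:.2f} (p = {p_g:.3f})")
```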
Future Work
There are various possible extensions for this work, for example, using all frames assigned to a phone, rather than only the middle frame. Recurrent architectures are natural candidates for such systems. Additionally, if using these techniques for speech therapy, the audio signal will be available. An extension of these analyses should not be limited to the ultrasound signal, but instead evaluate whether audio and ultrasound can be complementary. Further work should aim to extend the four classes to a more fine-grained set of places of articulation, possibly based on phonological processes. Similarly, investigating which classes lead to classification errors might help explain some of the observed results. Although we have looked at variables such as age, gender, or amount of data to explain speaker variation, there may be additional factors involved, such as the general quality of the ultrasound image. Image quality could be affected by probe placement, dry mouths, or other factors. Automatically identifying or measuring such cases could be beneficial for speech therapy, for example, by signalling to the therapist that the data being collected is sub-optimal.
Conclusion
In this paper, we have investigated speaker-independent models for the classification of phonetic segments from raw ultrasound data. We have shown that the performance of the models heavily degrades when evaluated on data from unseen speakers. This is a result of the variability in ultrasound images, mostly due to differences across speakers, but also due to shifts in probe placement. Using the mean of all ultrasound frames for a new speaker improves the generalization of the models to unseen data, especially when using convolutional neural networks. We have also shown that adapting a pre-trained speaker-independent system using as few as 50 ultrasound frames can outperform a corresponding speaker-dependent system. | Unanswerable |
abad9beb7295d809d7e5e1407cbf673c9ffffd19 | abad9beb7295d809d7e5e1407cbf673c9ffffd19_0 | Q: Do they propose any further additions that could be made to improve generalisation to unseen speakers?
Text: Introduction
Yes
265c9b733e4dfffb76acfbade4c0c9b14d3ccde1 | 265c9b733e4dfffb76acfbade4c0c9b14d3ccde1_0 | Q: What are the characteristics of the dataset?
Text: Introduction
Ultrasound tongue imaging (UTI) uses standard medical ultrasound to visualize the tongue surface during speech production. It provides a non-invasive, clinically safe, and increasingly inexpensive method to visualize the vocal tract. Articulatory visual biofeedback of the speech production process, using UTI, can be valuable for speech therapy BIBREF0 , BIBREF1 , BIBREF2 or language learning BIBREF3 , BIBREF4 . Ultrasound visual biofeedback combines auditory information with visual information of the tongue position, allowing users, for example, to correct inaccurate articulations in real-time during therapy or learning. In the context of speech therapy, automatic processing of ultrasound images was used for tongue contour extraction BIBREF5 and the animation of a tongue model BIBREF6 . More broadly, speech recognition and synthesis from articulatory signals BIBREF7 captured using UTI can be used with silent speech interfaces in order to help restore spoken communication for users with speech or motor impairments, or to allow silent spoken communication in situations where audible speech is undesirable BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Similarly, ultrasound images of the tongue have been used for direct estimation of acoustic parameters for speech synthesis BIBREF13 , BIBREF14 , BIBREF15 .
Speech and language therapists (SLTs) have found UTI to be very useful in speech therapy. In this work we explore the automatic processing of ultrasound tongue images in order to assist SLTs, who currently largely rely on manual processing when using articulatory imaging in speech therapy. One task that could assist SLTs is the automatic classification of tongue shapes from raw ultrasound. This can facilitate the diagnosis and treatment of speech sound disorders, by allowing SLTs to automatically identify incorrect articulations, or by quantifying patient progress in therapy. In addition to being directly useful for speech therapy, the classification of tongue shapes enables further understanding of phonetic variability in ultrasound tongue images. Much of the previous work in this area has focused on speaker-dependent models. In this work we investigate how automatic processing of ultrasound tongue imaging is affected by speaker variation, and how severe degradations in performance can be avoided when applying systems to data from previously unseen speakers through the use of speaker adaptation and speaker normalization approaches.
Below, we present the main challenges associated with the automatic processing of ultrasound data, together with a review of speaker-independent models applied to UTI. Following this, we present the experiments that we have performed (Section SECREF2 ), and discuss the results obtained (Section SECREF3 ). Finally we propose some future work and conclude the paper (Sections SECREF4 and SECREF5 ).
Ultrasound Tongue Imaging
There are several challenges associated with the automatic processing of ultrasound tongue images.
Image quality and limitations. UTI output tends to be noisy, with unrelated high-contrast edges, speckle noise, or interruptions of the tongue surface BIBREF16 , BIBREF17 . Additionally, the oral cavity is not entirely visible from the image, missing the lips, the palate, or the pharyngeal wall.
Inter-speaker variation. Age and physiology may affect the output, with children imaging better than adults due to more moisture in the mouth and less tissue fat BIBREF16 . However, dry mouths lead to poor imaging, which might occur in speech therapy if a child is nervous during a session. Similarly, the vocal tracts of children across different ages may be more variable than those of adults.
Probe placement. Articulators that are orthogonal to the ultrasound beam direction image well, while those at an angle tend to image poorly. Incorrect or variable probe placement during recordings may lead to high variability between otherwise similar tongue shapes. This may be controlled using helmets BIBREF18 , although it is unreasonable to expect the speaker to remain still throughout the recording session, especially if working with children. Therefore, probe displacement should be expected to be a factor in image quality and consistency.
Limited data. Although ultrasound imaging is becoming less expensive to acquire, there is still a lack of large publicly available databases to evaluate automatic processing methods. The UltraSuite Repository BIBREF19 , which we use in this work, helps alleviate this issue, but it still does not compare to standard speech recognition or image classification databases, which contain hundreds of hours of speech or millions of images.
Related Work
Earlier work concerned with speech recognition from ultrasound data has mostly been focused on speaker-dependent systems BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 . An exception is the work of Xu et al. BIBREF24 , which investigates the classification of tongue gestures from ultrasound data using convolutional neural networks. Some results are presented for a speaker-independent system, although the investigation is limited to two speakers generalizing to a third. Fabre et al BIBREF5 present a method for automatic tongue contour extraction from ultrasound data. The system is evaluated in a speaker-independent way by training on data from eight speakers and evaluating on a single held out speaker. In both of these studies, a large drop in accuracy was observed when using speaker-independent systems in comparison to speaker-dependent systems. Our investigation differs from previous work in that we focus on child speech while using a larger number of speakers (58 children). Additionally, we use cross-validation to evaluate the performance of speaker-independent systems across all speakers, rather than using a small held out subset.
Ultrasound Data
We use the Ultrax Typically Developing dataset (UXTD) from the publicly available UltraSuite repository BIBREF19 . This dataset contains synchronized acoustic and ultrasound data from 58 typically developing children, aged 5-12 years old (31 female, 27 male). The data was aligned at the phone level, according to the methods described in BIBREF19 , BIBREF25 . For this work, we discarded the acoustic data and focused only on the B-Mode ultrasound images capturing a midsagittal view of the tongue. The data was recorded using an Ultrasonix SonixRP machine with Articulate Assistant Advanced (AAA) software at approximately 121 fps with a 135° field of view. A single ultrasound frame consists of 412 echo returns from each of the 63 scan lines (63x412 raw frames). For this work, we only use UXTD type A (semantically unrelated words, such as pack, tap, peak, tea, oak, toe) and type B (non-words designed to elicit the articulation of target phones, such as apa, eepee, opo) utterances.
Data Selection
For this investigation, we define a simplified phonetic segment classification task. We determine four classes corresponding to distinct places of articulation. The first consists of bilabial and labiodental phones (e.g. /p, b, v, f, .../). The second class includes dental, alveolar, and postalveolar phones (e.g. /th, d, t, z, s, sh, .../). The third class consists of velar phones (e.g. /k, g, .../). Finally, the fourth class consists of alveolar approximant /r/. Figure FIGREF1 shows examples of the four classes for two speakers.
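For illustration only, a minimal sketch of the phone-to-class mapping is given below. It is not the original implementation: the phone sets are abridged to the examples listed above, and the actual label inventory produced by the forced alignment may differ.

```python
# Hypothetical mapping from force-aligned phone labels to the four
# place-of-articulation classes (abridged to the examples given above).
PHONE_CLASSES = {
    0: {"p", "b", "v", "f"},          # bilabial / labiodental
    1: {"th", "d", "t", "z", "s", "sh"},  # dental / alveolar / postalveolar
    2: {"k", "g"},                    # velar
    3: {"r"},                         # alveolar approximant
}

def phone_to_class(phone: str) -> int:
    """Return the class index for a phone label, or -1 if the phone is not used."""
    for class_id, phones in PHONE_CLASSES.items():
        if phone.lower() in phones:
            return class_id
    return -1
```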
For each speaker, we divide all available utterances into disjoint train, development, and test sets. Using the force-aligned phone boundaries, we extract the mid-phone frame for each example across the four classes, which leads to a data imbalance. Therefore, for all utterances in the training set, we randomly sample additional examples within a window of 5 frames around the center phone, so that there are at least 50 training examples per class per speaker. It is not always possible to reach the target of 50 examples, however, if no more data is available to sample from. This process gives a total of approximately 10,700 training examples with roughly 2,000 to 3,000 examples per class, with each speaker having an average of 185 examples. Because the amount of data varies per speaker, we compute a sampling score, which denotes the proportion of sampled examples to the speaker's total training examples. We expect speakers with high sampling scores (less unique data overall) to underperform when compared with speakers with more varied training examples.
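A rough sketch of the oversampling step for a single class of a single speaker is shown below. The data structures and the per-class sampling score are our simplifications of the procedure described above, not the original code.

```python
import random

def oversample_class(mid_frames, window_pool, target=50, seed=0):
    """mid_frames: the unique mid-phone frames for one class of one speaker.
    window_pool: additional frames within +/-5 frames of each phone centre.
    Extra frames are drawn until there are at least `target` examples or
    the pool is exhausted."""
    rng = random.Random(seed)
    examples = list(mid_frames)
    pool = list(window_pool)
    rng.shuffle(pool)
    while len(examples) < target and pool:
        examples.append(pool.pop())
    # Proportion of oversampled frames among the final examples; the score
    # described above aggregates this over all classes for a speaker.
    sampling_score = (len(examples) - len(mid_frames)) / max(len(examples), 1)
    return examples, sampling_score
```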
Preprocessing and Model Architectures
For each system, we normalize the training data to zero mean and unit variance. Due to the high dimensionality of the data (63x412 samples per frame), we have opted to investigate two preprocessing techniques: principal components analysis (PCA, often called eigentongues in this context) and a 2-dimensional discrete cosine transform (DCT). In this paper, Raw input denotes the mean-variance normalized raw ultrasound frame. PCA applies principal components analysis to the normalized training data and preserves the top 1000 components. DCT applies the 2D DCT to the normalized raw ultrasound frame and the upper left 40x40 submatrix (1600 coefficients) is flattened and used as input.
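The three input representations could be produced roughly as follows. This is a sketch, not the original implementation: it assumes the training frames are already loaded as a NumPy array of shape (num_examples, 63, 412), and per-pixel normalization statistics are our assumption.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.decomposition import PCA

def build_inputs(frames_train):
    """frames_train: float array of shape (num_examples, 63, 412)."""
    mean, std = frames_train.mean(axis=0), frames_train.std(axis=0)
    normalized = (frames_train - mean) / (std + 1e-8)

    # Raw: the mean-variance normalized frame, flattened to a 63*412 vector.
    raw = normalized.reshape(len(normalized), -1)

    # PCA ("eigentongues"): keep the top 1000 principal components.
    pca = PCA(n_components=1000)
    pca_feats = pca.fit_transform(raw)

    # DCT: 2-D DCT of each frame, keeping the upper-left 40x40 coefficients.
    dct_feats = np.stack([dctn(f, norm="ortho")[:40, :40].ravel()
                          for f in normalized])
    return raw, pca_feats, dct_feats
```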
The first type of classifier we evaluate in this work is the feedforward neural network (DNN), consisting of 3 hidden layers, each with 512 rectified linear units (ReLUs), and a softmax output layer. The networks are optimized for 40 epochs with a mini-batch of 32 samples using stochastic gradient descent. Based on preliminary experiments on the validation set, hyperparameters such as the learning rate, decay rate, and L2 weight vary depending on the input format (Raw, PCA, or DCT). Generally, Raw inputs work better with smaller learning rates and heavier regularization to prevent overfitting to the high-dimensional data. As a second classifier to evaluate, we use convolutional neural networks (CNNs) with 2 convolutional and max pooling layers, followed by 2 fully-connected ReLU layers with 512 nodes. The convolutional layers use 16 filters, 8x8 and 4x4 kernels respectively, and rectified units. The fully-connected layers use dropout with a drop probability of 0.2. Because CNN systems take longer to converge, they are optimized over 200 epochs. For all systems, at the end of every epoch, the model is evaluated on the development set, and the best model across all epochs is kept.
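A hedged sketch of the two architectures in tf.keras is given below. The learning rate and L2 weight are placeholders, since the tuned values per input format are not listed above, and the sketch should be read as illustrative rather than as the original implementation.

```python
import tensorflow as tf

def build_dnn(input_dim, num_classes=4, learning_rate=0.01, l2=1e-4):
    """3 hidden layers of 512 ReLUs with a softmax output, trained with SGD."""
    reg = tf.keras.regularizers.l2(l2)
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(512, activation="relu", kernel_regularizer=reg),
        tf.keras.layers.Dense(512, activation="relu", kernel_regularizer=reg),
        tf.keras.layers.Dense(512, activation="relu", kernel_regularizer=reg),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

def build_cnn(input_shape=(63, 412, 1), num_classes=4, learning_rate=0.01):
    """2 conv + max-pooling blocks (16 filters, 8x8 then 4x4 kernels),
    followed by 2 fully-connected ReLU layers of 512 units with dropout 0.2."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, (8, 8), activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(16, (4, 4), activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model
```

For example, `build_dnn(1600)` would match the DCT input dimensionality and `build_cnn()` the raw single-channel frames.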
Training Scenarios and Speaker Means
We train speaker-dependent systems separately for each speaker, using all of their training data (an average of 185 examples per speaker). These systems use less data overall than the remaining systems, although we still expect them to perform well, as the data matches in terms of speaker characteristics. Realistically, such systems would not be viable, as it would be unreasonable to collect large amounts of data for every child who is undergoing speech therapy. We further evaluate all trained systems in a multi-speaker scenario. In this configuration, the speaker sets for training, development, and testing are equal. That is, we evaluate on speakers that we have seen at training time, although on different utterances. A more realistic configuration is a speaker-independent scenario, which assumes that the speaker set available for training and development is disjoint from the speaker set used at test time. This scenario is implemented by leave-one-out cross-validation. Finally, we investigate a speaker adaptation scenario, where training data for the target speaker becomes available. This scenario is realistic, for example, if after a session, the therapist were to annotate a small number of training examples. In this work, we use the held-out training data to finetune a pretrained speaker-independent system for an additional 6 epochs in the DNN systems and 20 epochs for the CNN systems. We use all available training data across all training scenarios, and we investigate the effect of the number of samples on one of the top performing systems.
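The speaker-independent evaluation and the adaptation step could be organised along the lines of the sketch below. It is simplified: the "best epoch on the development set" model selection is omitted, the per-speaker data dictionary is our assumption, and `build_model` is assumed to return a compiled Keras model such as the CNN above.

```python
import numpy as np

def leave_one_speaker_out(data, build_model, epochs=200, adapt_epochs=20):
    """data: dict speaker_id -> {'train': (X, y), 'dev': (X, y), 'test': (X, y)}.
    Defaults follow the CNN systems (use epochs=40, adapt_epochs=6 for DNNs)."""
    results = {}
    for held_out in data:
        train_ids = [s for s in data if s != held_out]
        X_tr = np.concatenate([data[s]["train"][0] for s in train_ids])
        y_tr = np.concatenate([data[s]["train"][1] for s in train_ids])
        X_dev = np.concatenate([data[s]["dev"][0] for s in train_ids])
        y_dev = np.concatenate([data[s]["dev"][1] for s in train_ids])

        # Speaker-independent system: the held-out speaker is never seen in training.
        model = build_model()
        model.fit(X_tr, y_tr, validation_data=(X_dev, y_dev),
                  epochs=epochs, batch_size=32, verbose=0)
        _, independent_acc = model.evaluate(*data[held_out]["test"], verbose=0)

        # Speaker adaptation: finetune on the held-out speaker's training data.
        model.fit(*data[held_out]["train"], epochs=adapt_epochs,
                  batch_size=32, verbose=0)
        _, adapted_acc = model.evaluate(*data[held_out]["test"], verbose=0)
        results[held_out] = {"independent": independent_acc, "adapted": adapted_acc}
    return results
```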
This work is primarily concerned with generalizing to unseen speakers. Therefore, we investigate a method to provide models with speaker-specific inputs. A simple approach is to use the speaker mean, which is the pixel-wise mean of all raw frames associated with a given speaker, illustrated in Figure FIGREF8 . The mean frame might capture an overall area of tongue activity, average out noise, and compensate for probe placement differences across speakers. Speaker means are computed after mean variance normalization. For PCA-based systems, matrix decomposition is applied on the matrix of speaker means for the training data with 50 components being kept, while the 2D DCT is applied normally to each mean frame. In the DNN systems, the speaker mean is appended to the input vector. In the CNN system, the raw speaker mean is given to the network as a second channel. All model configurations are similar to those described earlier, except for the DNN using Raw input. Earlier experiments have shown that a larger number of parameters are needed for good generalization with a large number of inputs, so we use layers of 1024 nodes rather than 512.
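The speaker-mean inputs could be assembled roughly as follows; array shapes and function names are our assumptions for illustration.

```python
import numpy as np

def speaker_mean(raw_frames):
    """Pixel-wise mean of all normalized (63x412) frames for one speaker."""
    return raw_frames.mean(axis=0)

def dnn_input_with_mean(frame_vector, mean_vector):
    """DNN systems: append the (possibly PCA- or DCT-compressed) speaker mean
    to each input vector."""
    return np.concatenate([frame_vector, mean_vector])

def cnn_input_with_mean(frame, mean_frame):
    """CNN system: stack the raw speaker mean as a second image channel."""
    return np.stack([frame, mean_frame], axis=-1)  # shape (63, 412, 2)
```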
Results and Discussion
Results for all systems are presented in Table TABREF10 . When comparing preprocessing methods, we observe that PCA underperforms when compared with the 2-dimensional DCT or with the raw input. DCT-based systems achieve good results when compared with similar model architectures, especially when using smaller amounts of data as in the speaker-dependent scenario. When compared with raw input DNNs, the DCT-based systems likely benefit from the reduced dimensionality. In this case, lower dimensional inputs allow the model to generalize better and the truncation of the DCT matrix helps remove noise from the images. Compared with PCA-based systems, it is hypothesized that the observed improvements are due to the DCT's ability to encode the 2-D structure of the image, which is ignored by PCA. However, the DNN-DCT system does not outperform a CNN with raw input, ranking last across adapted systems.
When comparing training scenarios, as expected, speaker-independent systems underperform, which illustrates the difficulty involved in the generalization to unseen speakers. Multi-speaker systems outperform the corresponding speaker-dependent systems, which shows the usefulness of learning from a larger database, even if variable across speakers. Adapted systems improve over the dependent systems, except when using DCT. It is unclear why DCT-based systems underperform when adapting pre-trained models. Figure FIGREF11 shows the effect of the size of the adaptation data when finetuning a pre-trained speaker-independent system. As expected, the more data is available, the better that system performs. It is observed that, for the CNN system, with roughly 50 samples, the model outperforms a similar speaker-dependent system with roughly three times more examples.
Speaker means improve results across all scenarios. It is particularly useful for speaker-independent systems. The ability to generalize to unseen speakers is clear in the CNN system. Using the mean as a second channel in the convolutional network has the advantage of relating each pixel to its corresponding speaker mean value, allowing the model to better generalize to unseen speakers.
Figure FIGREF12 shows pair-wise scatterplots for the CNN system. Training scenarios are compared in terms of the effect on individual speakers. It is observed, for example, that the performance of a speaker-adapted system is similar to a multi-speaker system, with most speakers clustered around the identity line (bottom left subplot). Figure FIGREF12 also illustrates the variability across speakers for each of the training scenarios. The classification task is easier for some speakers than others. In an attempt to understand this variability, we can look at correlation between accuracy scores and various speaker details. For the CNN systems, we have found some correlation (Pearson's product-moment correlation) between accuracy and age for the dependent ( INLINEFORM0 ), multi-speaker ( INLINEFORM1 ), and adapted ( INLINEFORM2 ) systems. A very small correlation ( INLINEFORM3 ) was found for the independent system. Similarly, some correlation was found between accuracy and sampling score ( INLINEFORM4 ) for the dependent system, but not for the remaining scenarios. No correlation was found between accuracy and gender (point biserial correlation).
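The reported correlations could be computed along these lines (a sketch; the per-speaker arrays `accuracies`, `ages`, `sampling_scores`, and `genders` are assumed inputs):

```python
from scipy.stats import pearsonr, pointbiserialr

def speaker_correlations(accuracies, ages, sampling_scores, genders):
    """accuracies, ages, sampling_scores: one float per speaker.
    genders: binary coding (e.g. 0 = male, 1 = female), one value per speaker."""
    r_age, p_age = pearsonr(accuracies, ages)
    r_sampling, p_sampling = pearsonr(accuracies, sampling_scores)
    r_gender, p_gender = pointbiserialr(genders, accuracies)
    return {"age": (r_age, p_age),
            "sampling_score": (r_sampling, p_sampling),
            "gender": (r_gender, p_gender)}
```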
Future Work
There are various possible extensions for this work. One example is using all frames assigned to a phone, rather than only the middle frame. Recurrent architectures are natural candidates for such systems. Additionally, if using these techniques for speech therapy, the audio signal will be available. An extension of these analyses should not be limited to the ultrasound signal, but should instead evaluate whether audio and ultrasound can be complementary. Further work should aim to extend the four classes to a more fine-grained set of places of articulation, possibly based on phonological processes. Similarly, investigating which classes lead to classification errors might help explain some of the observed results. Although we have looked at variables such as age, gender, or amount of data to explain speaker variation, there may be additional factors involved, such as the general quality of the ultrasound image. Image quality could be affected by probe placement, dry mouths, or other factors. Automatically identifying or measuring such cases could be beneficial for speech therapy, for example, by signalling to the therapist that the data being collected is sub-optimal.
Conclusion
In this paper, we have investigated speaker-independent models for the classification of phonetic segments from raw ultrasound data. We have shown that the performance of the models heavily degrades when evaluated on data from unseen speakers. This is a result of the variability in ultrasound images, mostly due to differences across speakers, but also due to shifts in probe placement. Using the mean of all ultrasound frames for a new speaker improves the generalization of the models to unseen data, especially when using convolutional neural networks. We have also shown that adapting a pre-trained speaker-independent system using as few as 50 ultrasound frames can outperform a corresponding speaker-dependent system. | synchronized acoustic and ultrasound data from 58 typically developing children, aged 5-12 years old (31 female, 27 male), data was aligned at the phone-level, 121fps with a 135 field of view, single ultrasound frame consists of 412 echo returns from each of the 63 scan lines (63x412 raw frames) |
0f928732f226185c76ad5960402e9342c0619310 | 0f928732f226185c76ad5960402e9342c0619310_0 | Q: What type of models are used for classification?
Text: Introduction
Ultrasound tongue imaging (UTI) uses standard medical ultrasound to visualize the tongue surface during speech production. It provides a non-invasive, clinically safe, and increasingly inexpensive method to visualize the vocal tract. Articulatory visual biofeedback of the speech production process, using UTI, can be valuable for speech therapy BIBREF0 , BIBREF1 , BIBREF2 or language learning BIBREF3 , BIBREF4 . Ultrasound visual biofeedback combines auditory information with visual information of the tongue position, allowing users, for example, to correct inaccurate articulations in real-time during therapy or learning. In the context of speech therapy, automatic processing of ultrasound images was used for tongue contour extraction BIBREF5 and the animation of a tongue model BIBREF6 . More broadly, speech recognition and synthesis from articulatory signals BIBREF7 captured using UTI can be used with silent speech interfaces in order to help restore spoken communication for users with speech or motor impairments, or to allow silent spoken communication in situations where audible speech is undesirable BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Similarly, ultrasound images of the tongue have been used for direct estimation of acoustic parameters for speech synthesis BIBREF13 , BIBREF14 , BIBREF15 .
Speech and language therapists (SLTs) have found UTI to be very useful in speech therapy. In this work we explore the automatic processing of ultrasound tongue images in order to assist SLTs, who currently largely rely on manual processing when using articulatory imaging in speech therapy. One task that could assist SLTs is the automatic classification of tongue shapes from raw ultrasound. This can facilitate the diagnosis and treatment of speech sound disorders, by allowing SLTs to automatically identify incorrect articulations, or by quantifying patient progress in therapy. In addition to being directly useful for speech therapy, the classification of tongue shapes enables further understanding of phonetic variability in ultrasound tongue images. Much of the previous work in this area has focused on speaker-dependent models. In this work we investigate how automatic processing of ultrasound tongue imaging is affected by speaker variation, and how severe degradations in performance can be avoided when applying systems to data from previously unseen speakers through the use of speaker adaptation and speaker normalization approaches.
Below, we present the main challenges associated with the automatic processing of ultrasound data, together with a review of speaker-independent models applied to UTI. Following this, we present the experiments that we have performed (Section SECREF2 ), and discuss the results obtained (Section SECREF3 ). Finally we propose some future work and conclude the paper (Sections SECREF4 and SECREF5 ).
Ultrasound Tongue Imaging
There are several challenges associated with the automatic processing of ultrasound tongue images.
Image quality and limitations. UTI output tends to be noisy, with unrelated high-contrast edges, speckle noise, or interruptions of the tongue surface BIBREF16 , BIBREF17 . Additionally, the oral cavity is not entirely visible from the image, missing the lips, the palate, or the pharyngeal wall.
Inter-speaker variation. Age and physiology may affect the output, with children imaging better than adults due to more moisture in the mouth and less tissue fat BIBREF16 . However, dry mouths lead to poor imaging, which might occur in speech therapy if a child is nervous during a session. Similarly, the vocal tracts of children across different ages may be more variable than those of adults.
Probe placement. Articulators that are orthogonal to the ultrasound beam direction image well, while those at an angle tend to image poorly. Incorrect or variable probe placement during recordings may lead to high variability between otherwise similar tongue shapes. This may be controlled using helmets BIBREF18 , although it is unreasonable to expect the speaker to remain still throughout the recording session, especially if working with children. Therefore, probe displacement should be expected to be a factor in image quality and consistency.
Limited data. Although ultrasound imaging is becoming less expensive to acquire, there is still a lack of large publicly available databases to evaluate automatic processing methods. The UltraSuite Repository BIBREF19 , which we use in this work, helps alleviate this issue, but it still does not compare to standard speech recognition or image classification databases, which contain hundreds of hours of speech or millions of images.
Related Work
Earlier work concerned with speech recognition from ultrasound data has mostly been focused on speaker-dependent systems BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 . An exception is the work of Xu et al. BIBREF24 , which investigates the classification of tongue gestures from ultrasound data using convolutional neural networks. Some results are presented for a speaker-independent system, although the investigation is limited to two speakers generalizing to a third. Fabre et al BIBREF5 present a method for automatic tongue contour extraction from ultrasound data. The system is evaluated in a speaker-independent way by training on data from eight speakers and evaluating on a single held out speaker. In both of these studies, a large drop in accuracy was observed when using speaker-independent systems in comparison to speaker-dependent systems. Our investigation differs from previous work in that we focus on child speech while using a larger number of speakers (58 children). Additionally, we use cross-validation to evaluate the performance of speaker-independent systems across all speakers, rather than using a small held out subset.
Ultrasound Data
We use the Ultrax Typically Developing dataset (UXTD) from the publicly available UltraSuite repository BIBREF19 . This dataset contains synchronized acoustic and ultrasound data from 58 typically developing children, aged 5-12 years old (31 female, 27 male). The data was aligned at the phone level, according to the methods described in BIBREF19 , BIBREF25 . For this work, we discarded the acoustic data and focused only on the B-Mode ultrasound images capturing a midsagittal view of the tongue. The data was recorded using an Ultrasonix SonixRP machine with Articulate Assistant Advanced (AAA) software at approximately 121 fps with a 135° field of view. A single ultrasound frame consists of 412 echo returns from each of the 63 scan lines (63x412 raw frames). For this work, we only use UXTD type A (semantically unrelated words, such as pack, tap, peak, tea, oak, toe) and type B (non-words designed to elicit the articulation of target phones, such as apa, eepee, opo) utterances.
Data Selection
For this investigation, we define a simplified phonetic segment classification task. We determine four classes corresponding to distinct places of articulation. The first consists of bilabial and labiodental phones (e.g. /p, b, v, f, .../). The second class includes dental, alveolar, and postalveolar phones (e.g. /th, d, t, z, s, sh, .../). The third class consists of velar phones (e.g. /k, g, .../). Finally, the fourth class consists of alveolar approximant /r/. Figure FIGREF1 shows examples of the four classes for two speakers.
For each speaker, we divide all available utterances into disjoint train, development, and test sets. Using the force-aligned phone boundaries, we extract the mid-phone frame for each example across the four classes, which leads to a data imbalance. Therefore, for all utterances in the training set, we randomly sample additional examples within a window of 5 frames around the center phone, so that there are at least 50 training examples per class per speaker. It is not always possible to reach the target of 50 examples, however, if no more data is available to sample from. This process gives a total of approximately 10,700 training examples with roughly 2,000 to 3,000 examples per class, with each speaker having an average of 185 examples. Because the amount of data varies per speaker, we compute a sampling score, which denotes the proportion of sampled examples to the speaker's total training examples. We expect speakers with high sampling scores (less unique data overall) to underperform when compared with speakers with more varied training examples.
Preprocessing and Model Architectures
For each system, we normalize the training data to zero mean and unit variance. Due to the high dimensionality of the data (63x412 samples per frame), we have opted to investigate two preprocessing techniques: principal components analysis (PCA, often called eigentongues in this context) and a 2-dimensional discrete cosine transform (DCT). In this paper, Raw input denotes the mean-variance normalized raw ultrasound frame. PCA applies principal components analysis to the normalized training data and preserves the top 1000 components. DCT applies the 2D DCT to the normalized raw ultrasound frame and the upper left 40x40 submatrix (1600 coefficients) is flattened and used as input.
The first type of classifier we evaluate in this work is the feedforward neural network (DNN), consisting of 3 hidden layers, each with 512 rectified linear units (ReLUs), and a softmax output layer. The networks are optimized for 40 epochs with a mini-batch of 32 samples using stochastic gradient descent. Based on preliminary experiments on the validation set, hyperparameters such as the learning rate, decay rate, and L2 weight vary depending on the input format (Raw, PCA, or DCT). Generally, Raw inputs work better with smaller learning rates and heavier regularization to prevent overfitting to the high-dimensional data. As a second classifier to evaluate, we use convolutional neural networks (CNNs) with 2 convolutional and max pooling layers, followed by 2 fully-connected ReLU layers with 512 nodes. The convolutional layers use 16 filters, 8x8 and 4x4 kernels respectively, and rectified units. The fully-connected layers use dropout with a drop probability of 0.2. Because CNN systems take longer to converge, they are optimized over 200 epochs. For all systems, at the end of every epoch, the model is evaluated on the development set, and the best model across all epochs is kept.
Training Scenarios and Speaker Means
We train speaker-dependent systems separately for each speaker, using all of their training data (an average of 185 examples per speaker). These systems use less data overall than the remaining systems, although we still expect them to perform well, as the data matches in terms of speaker characteristics. Realistically, such systems would not be viable, as it would be unreasonable to collect large amounts of data for every child who is undergoing speech therapy. We further evaluate all trained systems in a multi-speaker scenario. In this configuration, the speaker sets for training, development, and testing are equal. That is, we evaluate on speakers that we have seen at training time, although on different utterances. A more realistic configuration is a speaker-independent scenario, which assumes that the speaker set available for training and development is disjoint from the speaker set used at test time. This scenario is implemented by leave-one-out cross-validation. Finally, we investigate a speaker adaptation scenario, where training data for the target speaker becomes available. This scenario is realistic, for example, if after a session, the therapist were to annotate a small number of training examples. In this work, we use the held-out training data to finetune a pretrained speaker-independent system for an additional 6 epochs in the DNN systems and 20 epochs for the CNN systems. We use all available training data across all training scenarios, and we investigate the effect of the number of samples on one of the top performing systems.
This work is primarily concerned with generalizing to unseen speakers. Therefore, we investigate a method to provide models with speaker-specific inputs. A simple approach is to use the speaker mean, which is the pixel-wise mean of all raw frames associated with a given speaker, illustrated in Figure FIGREF8 . The mean frame might capture an overall area of tongue activity, average out noise, and compensate for probe placement differences across speakers. Speaker means are computed after mean variance normalization. For PCA-based systems, matrix decomposition is applied on the matrix of speaker means for the training data with 50 components being kept, while the 2D DCT is applied normally to each mean frame. In the DNN systems, the speaker mean is appended to the input vector. In the CNN system, the raw speaker mean is given to the network as a second channel. All model configurations are similar to those described earlier, except for the DNN using Raw input. Earlier experiments have shown that a larger number of parameters are needed for good generalization with a large number of inputs, so we use layers of 1024 nodes rather than 512.
Results and Discussion
Results for all systems are presented in Table TABREF10 . When comparing preprocessing methods, we observe that PCA underperforms when compared with the 2-dimensional DCT or with the raw input. DCT-based systems achieve good results when compared with similar model architectures, especially when using smaller amounts of data as in the speaker-dependent scenario. When compared with raw input DNNs, the DCT-based systems likely benefit from the reduced dimensionality. In this case, lower dimensional inputs allow the model to generalize better and the truncation of the DCT matrix helps remove noise from the images. Compared with PCA-based systems, it is hypothesized that the observed improvements are due to the DCT's ability to encode the 2-D structure of the image, which is ignored by PCA. However, the DNN-DCT system does not outperform a CNN with raw input, ranking last across adapted systems.
When comparing training scenarios, as expected, speaker-independent systems underperform, which illustrates the difficulty involved in the generalization to unseen speakers. Multi-speaker systems outperform the corresponding speaker-dependent systems, which shows the usefulness of learning from a larger database, even if variable across speakers. Adapted systems improve over the dependent systems, except when using DCT. It is unclear why DCT-based systems underperform when adapting pre-trained models. Figure FIGREF11 shows the effect of the size of the adaptation data when finetuning a pre-trained speaker-independent system. As expected, the more data is available, the better that system performs. It is observed that, for the CNN system, with roughly 50 samples, the model outperforms a similar speaker-dependent system with roughly three times more examples.
Speaker means improve results across all scenarios. It is particularly useful for speaker-independent systems. The ability to generalize to unseen speakers is clear in the CNN system. Using the mean as a second channel in the convolutional network has the advantage of relating each pixel to its corresponding speaker mean value, allowing the model to better generalize to unseen speakers.
Figure FIGREF12 shows pair-wise scatterplots for the CNN system. Training scenarios are compared in terms of the effect on individual speakers. It is observed, for example, that the performance of a speaker-adapted system is similar to a multi-speaker system, with most speakers clustered around the identity line (bottom left subplot). Figure FIGREF12 also illustrates the variability across speakers for each of the training scenarios. The classification task is easier for some speakers than others. In an attempt to understand this variability, we can look at correlation between accuracy scores and various speaker details. For the CNN systems, we have found some correlation (Pearson's product-moment correlation) between accuracy and age for the dependent ( INLINEFORM0 ), multi-speaker ( INLINEFORM1 ), and adapted ( INLINEFORM2 ) systems. A very small correlation ( INLINEFORM3 ) was found for the independent system. Similarly, some correlation was found between accuracy and sampling score ( INLINEFORM4 ) for the dependent system, but not for the remaining scenarios. No correlation was found between accuracy and gender (point biserial correlation).
Future Work
There are various possible extensions for this work. One example is using all frames assigned to a phone, rather than only the middle frame. Recurrent architectures are natural candidates for such systems. Additionally, if using these techniques for speech therapy, the audio signal will be available. An extension of these analyses should not be limited to the ultrasound signal, but should instead evaluate whether audio and ultrasound can be complementary. Further work should aim to extend the four classes to a more fine-grained set of places of articulation, possibly based on phonological processes. Similarly, investigating which classes lead to classification errors might help explain some of the observed results. Although we have looked at variables such as age, gender, or amount of data to explain speaker variation, there may be additional factors involved, such as the general quality of the ultrasound image. Image quality could be affected by probe placement, dry mouths, or other factors. Automatically identifying or measuring such cases could be beneficial for speech therapy, for example, by signalling to the therapist that the data being collected is sub-optimal.
Conclusion
In this paper, we have investigated speaker-independent models for the classification of phonetic segments from raw ultrasound data. We have shown that the performance of the models heavily degrades when evaluated on data from unseen speakers. This is a result of the variability in ultrasound images, mostly due to differences across speakers, but also due to shifts in probe placement. Using the mean of all ultrasound frames for a new speaker improves the generalization of the models to unseen data, especially when using convolutional neural networks. We have also shown that adapting a pre-trained speaker-independent system using as few as 50 ultrasound frames can outperform a corresponding speaker-dependent system. | feedforward neural networks (DNNs), convolutional neural networks (CNNs) |
11c5b12e675cfd8d1113724f019d8476275bd700 | 11c5b12e675cfd8d1113724f019d8476275bd700_0 | Q: Do they compare to previous work?
Text: Introduction
Ultrasound tongue imaging (UTI) uses standard medical ultrasound to visualize the tongue surface during speech production. It provides a non-invasive, clinically safe, and increasingly inexpensive method to visualize the vocal tract. Articulatory visual biofeedback of the speech production process, using UTI, can be valuable for speech therapy BIBREF0 , BIBREF1 , BIBREF2 or language learning BIBREF3 , BIBREF4 . Ultrasound visual biofeedback combines auditory information with visual information of the tongue position, allowing users, for example, to correct inaccurate articulations in real-time during therapy or learning. In the context of speech therapy, automatic processing of ultrasound images was used for tongue contour extraction BIBREF5 and the animation of a tongue model BIBREF6 . More broadly, speech recognition and synthesis from articulatory signals BIBREF7 captured using UTI can be used with silent speech interfaces in order to help restore spoken communication for users with speech or motor impairments, or to allow silent spoken communication in situations where audible speech is undesirable BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Similarly, ultrasound images of the tongue have been used for direct estimation of acoustic parameters for speech synthesis BIBREF13 , BIBREF14 , BIBREF15 .
Speech and language therapists (SLTs) have found UTI to be very useful in speech therapy. In this work we explore the automatic processing of ultrasound tongue images in order to assist SLTs, who currently largely rely on manual processing when using articulatory imaging in speech therapy. One task that could assist SLTs is the automatic classification of tongue shapes from raw ultrasound. This can facilitate the diagnosis and treatment of speech sound disorders, by allowing SLTs to automatically identify incorrect articulations, or by quantifying patient progress in therapy. In addition to being directly useful for speech therapy, the classification of tongue shapes enables further understanding of phonetic variability in ultrasound tongue images. Much of the previous work in this area has focused on speaker-dependent models. In this work we investigate how automatic processing of ultrasound tongue imaging is affected by speaker variation, and how severe degradations in performance can be avoided when applying systems to data from previously unseen speakers through the use of speaker adaptation and speaker normalization approaches.
Below, we present the main challenges associated with the automatic processing of ultrasound data, together with a review of speaker-independent models applied to UTI. Following this, we present the experiments that we have performed (Section SECREF2 ), and discuss the results obtained (Section SECREF3 ). Finally we propose some future work and conclude the paper (Sections SECREF4 and SECREF5 ).
Ultrasound Tongue Imaging
There are several challenges associated with the automatic processing of ultrasound tongue images.
Image quality and limitations. UTI output tends to be noisy, with unrelated high-contrast edges, speckle noise, or interruptions of the tongue surface BIBREF16 , BIBREF17 . Additionally, the oral cavity is not entirely visible from the image, missing the lips, the palate, or the pharyngeal wall.
Inter-speaker variation. Age and physiology may affect the output, with children imaging better than adults due to more moisture in the mouth and less tissue fat BIBREF16 . However, dry mouths lead to poor imaging, which might occur in speech therapy if a child is nervous during a session. Similarly, the vocal tracts of children across different ages may be more variable than those of adults.
Probe placement. Articulators that are orthogonal to the ultrasound beam direction image well, while those at an angle tend to image poorly. Incorrect or variable probe placement during recordings may lead to high variability between otherwise similar tongue shapes. This may be controlled using helmets BIBREF18 , although it is unreasonable to expect the speaker to remain still throughout the recording session, especially if working with children. Therefore, probe displacement should be expected to be a factor in image quality and consistency.
Limited data. Although ultrasound imaging is becoming less expensive to acquire, there is still a lack of large publicly available databases to evaluate automatic processing methods. The UltraSuite Repository BIBREF19 , which we use in this work, helps alleviate this issue, but it still does not compare to standard speech recognition or image classification databases, which contain hundreds of hours of speech or millions of images.
Related Work
Earlier work concerned with speech recognition from ultrasound data has mostly been focused on speaker-dependent systems BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 . An exception is the work of Xu et al. BIBREF24 , which investigates the classification of tongue gestures from ultrasound data using convolutional neural networks. Some results are presented for a speaker-independent system, although the investigation is limited to two speakers generalizing to a third. Fabre et al BIBREF5 present a method for automatic tongue contour extraction from ultrasound data. The system is evaluated in a speaker-independent way by training on data from eight speakers and evaluating on a single held out speaker. In both of these studies, a large drop in accuracy was observed when using speaker-independent systems in comparison to speaker-dependent systems. Our investigation differs from previous work in that we focus on child speech while using a larger number of speakers (58 children). Additionally, we use cross-validation to evaluate the performance of speaker-independent systems across all speakers, rather than using a small held out subset.
Ultrasound Data
We use the Ultrax Typically Developing dataset (UXTD) from the publicly available UltraSuite repository BIBREF19 . This dataset contains synchronized acoustic and ultrasound data from 58 typically developing children, aged 5-12 years old (31 female, 27 male). The data was aligned at the phone level, according to the methods described in BIBREF19 , BIBREF25 . For this work, we discarded the acoustic data and focused only on the B-Mode ultrasound images capturing a midsagittal view of the tongue. The data was recorded using an Ultrasonix SonixRP machine with Articulate Assistant Advanced (AAA) software at approximately 121 fps with a 135° field of view. A single ultrasound frame consists of 412 echo returns from each of the 63 scan lines (63x412 raw frames). For this work, we only use UXTD type A (semantically unrelated words, such as pack, tap, peak, tea, oak, toe) and type B (non-words designed to elicit the articulation of target phones, such as apa, eepee, opo) utterances.
Data Selection
For this investigation, we define a simplified phonetic segment classification task. We determine four classes corresponding to distinct places of articulation. The first consists of bilabial and labiodental phones (e.g. /p, b, v, f, .../). The second class includes dental, alveolar, and postalveolar phones (e.g. /th, d, t, z, s, sh, .../). The third class consists of velar phones (e.g. /k, g, .../). Finally, the fourth class consists of alveolar approximant /r/. Figure FIGREF1 shows examples of the four classes for two speakers.
For each speaker, we divide all available utterances into disjoint train, development, and test sets. Using the force-aligned phone boundaries, we extract the mid-phone frame for each example across the four classes, which leads to a data imbalance. Therefore, for all utterances in the training set, we randomly sample additional examples within a window of 5 frames around the center phone, so that there are at least 50 training examples per class per speaker. It is not always possible to reach the target of 50 examples, however, if no more data is available to sample from. This process gives a total of approximately 10,700 training examples with roughly 2,000 to 3,000 examples per class, with each speaker having an average of 185 examples. Because the amount of data varies per speaker, we compute a sampling score, which denotes the proportion of sampled examples to the speaker's total training examples. We expect speakers with high sampling scores (less unique data overall) to underperform when compared with speakers with more varied training examples.
Preprocessing and Model Architectures
For each system, we normalize the training data to zero mean and unit variance. Due to the high dimensionality of the data (63x412 samples per frame), we have opted to investigate two preprocessing techniques: principal components analysis (PCA, often called eigentongues in this context) and a 2-dimensional discrete cosine transform (DCT). In this paper, Raw input denotes the mean-variance normalized raw ultrasound frame. PCA applies principal components analysis to the normalized training data and preserves the top 1000 components. DCT applies the 2D DCT to the normalized raw ultrasound frame and the upper left 40x40 submatrix (1600 coefficients) is flattened and used as input.
The first type of classifier we evaluate in this work is the feedforward neural network (DNN), consisting of 3 hidden layers, each with 512 rectified linear units (ReLUs), and a softmax output layer. The networks are optimized for 40 epochs with a mini-batch of 32 samples using stochastic gradient descent. Based on preliminary experiments on the validation set, hyperparameters such as the learning rate, decay rate, and L2 weight vary depending on the input format (Raw, PCA, or DCT). Generally, Raw inputs work better with smaller learning rates and heavier regularization to prevent overfitting to the high-dimensional data. As a second classifier to evaluate, we use convolutional neural networks (CNNs) with 2 convolutional and max pooling layers, followed by 2 fully-connected ReLU layers with 512 nodes. The convolutional layers use 16 filters, 8x8 and 4x4 kernels respectively, and rectified units. The fully-connected layers use dropout with a drop probability of 0.2. Because CNN systems take longer to converge, they are optimized over 200 epochs. For all systems, at the end of every epoch, the model is evaluated on the development set, and the best model across all epochs is kept.
Training Scenarios and Speaker Means
We train speaker-dependent systems separately for each speaker, using all of their training data (an average of 185 examples per speaker). These systems use less data overall than the remaining systems, although we still expect them to perform well, as the data matches in terms of speaker characteristics. Realistically, such systems would not be viable, as it would be unreasonable to collect large amounts of data for every child who is undergoing speech therapy. We further evaluate all trained systems in a multi-speaker scenario. In this configuration, the speaker sets for training, development, and testing are equal. That is, we evaluate on speakers that we have seen at training time, although on different utterances. A more realistic configuration is a speaker-independent scenario, which assumes that the speaker set available for training and development is disjoint from the speaker set used at test time. This scenario is implemented by leave-one-out cross-validation. Finally, we investigate a speaker adaptation scenario, where training data for the target speaker becomes available. This scenario is realistic, for example, if after a session, the therapist were to annotate a small number of training examples. In this work, we use the held-out training data to finetune a pretrained speaker-independent system for an additional 6 epochs in the DNN systems and 20 epochs for the CNN systems. We use all available training data across all training scenarios, and we investigate the effect of the number of samples on one of the top performing systems.
This work is primarily concerned with generalizing to unseen speakers. Therefore, we investigate a method to provide models with speaker-specific inputs. A simple approach is to use the speaker mean, which is the pixel-wise mean of all raw frames associated with a given speaker, illustrated in Figure FIGREF8 . The mean frame might capture an overall area of tongue activity, average out noise, and compensate for probe placement differences across speakers. Speaker means are computed after mean variance normalization. For PCA-based systems, matrix decomposition is applied on the matrix of speaker means for the training data with 50 components being kept, while the 2D DCT is applied normally to each mean frame. In the DNN systems, the speaker mean is appended to the input vector. In the CNN system, the raw speaker mean is given to the network as a second channel. All model configurations are similar to those described earlier, except for the DNN using Raw input. Earlier experiments have shown that a larger number of parameters are needed for good generalization with a large number of inputs, so we use layers of 1024 nodes rather than 512.
Results and Discussion
Results for all systems are presented in Table TABREF10 . When comparing preprocessing methods, we observe that PCA underperforms when compared with the 2-dimensional DCT or with the raw input. DCT-based systems achieve good results when compared with similar model architectures, especially when using smaller amounts of data as in the speaker-dependent scenario. When compared with raw input DNNs, the DCT-based systems likely benefit from the reduced dimensionality. In this case, lower dimensional inputs allow the model to generalize better and the truncation of the DCT matrix helps remove noise from the images. Compared with PCA-based systems, it is hypothesized that the observed improvements are due to the DCT's ability to encode the 2-D structure of the image, which is ignored by PCA. However, the DNN-DCT system does not outperform a CNN with raw input, ranking last across adapted systems.
When comparing training scenarios, as expected, speaker-independent systems underperform, which illustrates the difficulty involved in the generalization to unseen speakers. Multi-speaker systems outperform the corresponding speaker-dependent systems, which shows the usefulness of learning from a larger database, even if variable across speakers. Adapted systems improve over the dependent systems, except when using DCT. It is unclear why DCT-based systems underperform when adapting pre-trained models. Figure FIGREF11 shows the effect of the size of the adaptation data when finetuning a pre-trained speaker-independent system. As expected, the more data is available, the better that system performs. It is observed that, for the CNN system, with roughly 50 samples, the model outperforms a similar speaker-dependent system with roughly three times more examples.
Speaker means improve results across all scenarios. It is particularly useful for speaker-independent systems. The ability to generalize to unseen speakers is clear in the CNN system. Using the mean as a second channel in the convolutional network has the advantage of relating each pixel to its corresponding speaker mean value, allowing the model to better generalize to unseen speakers.
Figure FIGREF12 shows pair-wise scatterplots for the CNN system. Training scenarios are compared in terms of the effect on individual speakers. It is observed, for example, that the performance of a speaker-adapted system is similar to a multi-speaker system, with most speakers clustered around the identity line (bottom left subplot). Figure FIGREF12 also illustrates the variability across speakers for each of the training scenarios. The classification task is easier for some speakers than others. In an attempt to understand this variability, we can look at correlation between accuracy scores and various speaker details. For the CNN systems, we have found some correlation (Pearson's product-moment correlation) between accuracy and age for the dependent ( INLINEFORM0 ), multi-speaker ( INLINEFORM1 ), and adapted ( INLINEFORM2 ) systems. A very small correlation ( INLINEFORM3 ) was found for the independent system. Similarly, some correlation was found between accuracy and sampling score ( INLINEFORM4 ) for the dependent system, but not for the remaining scenarios. No correlation was found between accuracy and gender (point biserial correlation).
Future Work
There are various possible extensions for this work. One example is using all frames assigned to a phone, rather than only the middle frame. Recurrent architectures are natural candidates for such systems. Additionally, if using these techniques for speech therapy, the audio signal will be available. An extension of these analyses should not be limited to the ultrasound signal, but should instead evaluate whether audio and ultrasound can be complementary. Further work should aim to extend the four classes to a more fine-grained set of places of articulation, possibly based on phonological processes. Similarly, investigating which classes lead to classification errors might help explain some of the observed results. Although we have looked at variables such as age, gender, or amount of data to explain speaker variation, there may be additional factors involved, such as the general quality of the ultrasound image. Image quality could be affected by probe placement, dry mouths, or other factors. Automatically identifying or measuring such cases could be beneficial for speech therapy, for example, by signalling to the therapist that the data being collected is sub-optimal.
Conclusion
In this paper, we have investigated speaker-independent models for the classification of phonetic segments from raw ultrasound data. We have shown that the performance of the models heavily degrades when evaluated on data from unseen speakers. This is a result of the variability in ultrasound images, mostly due to differences across speakers, but also due to shifts in probe placement. Using the mean of all ultrasound frames for a new speaker improves the generalization of the models to unseen data, especially when using convolutional neural networks. We have also shown that adapting a pre-trained speaker-independent system using as few as 50 ultrasound frames can outperform a corresponding speaker-dependent system. | No |
d24acc567ebaec1efee52826b7eaadddc0a89e8b | d24acc567ebaec1efee52826b7eaadddc0a89e8b_0 | Q: How many instances does their dataset have?
Text: Introduction
Ultrasound tongue imaging (UTI) uses standard medical ultrasound to visualize the tongue surface during speech production. It provides a non-invasive, clinically safe, and increasingly inexpensive method to visualize the vocal tract. Articulatory visual biofeedback of the speech production process, using UTI, can be valuable for speech therapy BIBREF0 , BIBREF1 , BIBREF2 or language learning BIBREF3 , BIBREF4 . Ultrasound visual biofeedback combines auditory information with visual information of the tongue position, allowing users, for example, to correct inaccurate articulations in real-time during therapy or learning. In the context of speech therapy, automatic processing of ultrasound images was used for tongue contour extraction BIBREF5 and the animation of a tongue model BIBREF6 . More broadly, speech recognition and synthesis from articulatory signals BIBREF7 captured using UTI can be used with silent speech interfaces in order to help restore spoken communication for users with speech or motor impairments, or to allow silent spoken communication in situations where audible speech is undesirable BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Similarly, ultrasound images of the tongue have been used for direct estimation of acoustic parameters for speech synthesis BIBREF13 , BIBREF14 , BIBREF15 .
Speech and language therapists (SLTs) have found UTI to be very useful in speech therapy. In this work we explore the automatic processing of ultrasound tongue images in order to assist SLTs, who currently largely rely on manual processing when using articulatory imaging in speech therapy. One task that could assist SLTs is the automatic classification of tongue shapes from raw ultrasound. This can facilitate the diagnosis and treatment of speech sound disorders, by allowing SLTs to automatically identify incorrect articulations, or by quantifying patient progress in therapy. In addition to being directly useful for speech therapy, the classification of tongue shapes enables further understanding of phonetic variability in ultrasound tongue images. Much of the previous work in this area has focused on speaker-dependent models. In this work we investigate how automatic processing of ultrasound tongue imaging is affected by speaker variation, and how severe degradations in performance can be avoided when applying systems to data from previously unseen speakers through the use of speaker adaptation and speaker normalization approaches.
Below, we present the main challenges associated with the automatic processing of ultrasound data, together with a review of speaker-independent models applied to UTI. Following this, we present the experiments that we have performed (Section SECREF2 ), and discuss the results obtained (Section SECREF3 ). Finally we propose some future work and conclude the paper (Sections SECREF4 and SECREF5 ).
Ultrasound Tongue Imaging
There are several challenges associated with the automatic processing of ultrasound tongue images.
Image quality and limitations. UTI output tends to be noisy, with unrelated high-contrast edges, speckle noise, or interruptions of the tongue surface BIBREF16 , BIBREF17 . Additionally, the oral cavity is not entirely visible from the image, missing the lips, the palate, or the pharyngeal wall.
Inter-speaker variation. Age and physiology may affect the output, with children imaging better than adults due to more moisture in the mouth and less tissue fat BIBREF16 . However, dry mouths lead to poor imaging, which might occur in speech therapy if a child is nervous during a session. Similarly, the vocal tracts of children across different ages may be more variable than those of adults.
Probe placement. Articulators that are orthogonal to the ultrasound beam direction image well, while those at an angle tend to image poorly. Incorrect or variable probe placement during recordings may lead to high variability between otherwise similar tongue shapes. This may be controlled using helmets BIBREF18 , although it is unreasonable to expect the speaker to remain still throughout the recording session, especially if working with children. Therefore, probe displacement should be expected to be a factor in image quality and consistency.
Limited data. Although ultrasound imaging is becoming less expensive to acquire, there is still a lack of large publicly available databases to evaluate automatic processing methods. The UltraSuite Repository BIBREF19 , which we use in this work, helps alleviate this issue, but it still does not compare to standard speech recognition or image classification databases, which contain hundreds of hours of speech or millions of images.
Related Work
Earlier work concerned with speech recognition from ultrasound data has mostly focused on speaker-dependent systems BIBREF20, BIBREF21, BIBREF22, BIBREF23. An exception is the work of Xu et al. BIBREF24, which investigates the classification of tongue gestures from ultrasound data using convolutional neural networks. Some results are presented for a speaker-independent system, although the investigation is limited to two speakers generalizing to a third. Fabre et al. BIBREF5 present a method for automatic tongue contour extraction from ultrasound data. The system is evaluated in a speaker-independent way by training on data from eight speakers and evaluating on a single held-out speaker. In both of these studies, a large drop in accuracy was observed when using speaker-independent systems in comparison to speaker-dependent systems. Our investigation differs from previous work in that we focus on child speech while using a larger number of speakers (58 children). Additionally, we use cross-validation to evaluate the performance of speaker-independent systems across all speakers, rather than using a small held-out subset.
Ultrasound Data
We use the Ultrax Typically Developing dataset (UXTD) from the publicly available UltraSuite repository BIBREF19. This dataset contains synchronized acoustic and ultrasound data from 58 typically developing children, aged 5-12 years old (31 female, 27 male). The data was aligned at the phone level, according to the methods described in BIBREF19, BIBREF25. For this work, we discarded the acoustic data and focused only on the B-Mode ultrasound images capturing a midsagittal view of the tongue. The data was recorded with an Ultrasonix SonixRP machine using Articulate Assistant Advanced (AAA) software at approximately 121 fps with a 135° field of view. A single ultrasound frame consists of 412 echo returns from each of the 63 scan lines (63x412 raw frames). For this work, we only use UXTD type A (semantically unrelated words, such as pack, tap, peak, tea, oak, toe) and type B (non-words designed to elicit the articulation of target phones, such as apa, eepee, opo) utterances.
Data Selection
For this investigation, we define a simplified phonetic segment classification task. We determine four classes corresponding to distinct places of articulation. The first consists of bilabial and labiodental phones (e.g. /p, b, v, f, .../). The second class includes dental, alveolar, and postalveolar phones (e.g. /th, d, t, z, s, sh, .../). The third class consists of velar phones (e.g. /k, g, .../). Finally, the fourth class consists of alveolar approximant /r/. Figure FIGREF1 shows examples of the four classes for two speakers.
For each speaker, we divide all available utterances into disjoint train, development, and test sets. Using the force-aligned phone boundaries, we extract the mid-phone frame for each example across the four classes, which leads to a data imbalance. Therefore, for all utterances in the training set, we randomly sample additional examples within a window of 5 frames around the center phone, so that there are at least 50 training examples per class per speaker. It is not always possible to reach the target of 50 examples, however, if no more data is available to sample from. This process gives a total of approximately 10,700 training examples with roughly 2000 to 3000 examples per class, with each speaker having an average of 185 examples. Because the amount of data varies per speaker, we compute a sampling score, which denotes the proportion of sampled examples to the speaker's total training examples. We expect speakers with high sampling scores (less unique data overall) to underperform when compared with speakers with more varied training examples.
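As a concrete illustration, the sampling procedure for a single speaker could be implemented along the following lines. This is a minimal sketch rather than the original code: the data structures (per-utterance frame arrays with force-aligned (label, centre-frame) pairs) and the function name are assumptions.

```python
import random

def sample_training_frames(train_utterances, target_per_class=50, window=5):
    """Collect the mid-phone frame for every example, then top up each class
    with frames sampled within +/- `window` frames of the phone centre.
    `train_utterances` is a list of (frames, alignments) pairs, where
    `alignments` is a list of (class_label, centre_frame_index) tuples."""
    examples = {}     # class label -> list of training frames
    candidates = {}   # class label -> extra (utterance_id, frame_index) pairs
    for utt_id, (frames, alignments) in enumerate(train_utterances):
        for label, centre in alignments:
            examples.setdefault(label, []).append(frames[centre])
            lo, hi = max(centre - window, 0), min(centre + window, len(frames) - 1)
            extra = [(utt_id, i) for i in range(lo, hi + 1) if i != centre]
            candidates.setdefault(label, []).extend(extra)
    for label, extra in candidates.items():
        needed = target_per_class - len(examples[label])
        if needed > 0:  # the target may be unreachable if too little data exists
            for utt_id, i in random.sample(extra, min(needed, len(extra))):
                examples[label].append(train_utterances[utt_id][0][i])
    return examples
```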
Preprocessing and Model Architectures
For each system, we normalize the training data to zero mean and unit variance. Due to the high dimensionality of the data (63x412 samples per frame), we have opted to investigate two preprocessing techniques: principal components analysis (PCA, often called eigentongues in this context) and a 2-dimensional discrete cosine transform (DCT). In this paper, Raw input denotes the mean-variance normalized raw ultrasound frame. PCA applies principal components analysis to the normalized training data and preserves the top 1000 components. DCT applies the 2D DCT to the normalized raw ultrasound frame and the upper left 40x40 submatrix (1600 coefficients) is flattened and used as input.
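The three input representations can be sketched as follows with NumPy, SciPy, and scikit-learn; this is an illustrative reconstruction, and details left unstated in the text (for example, the DCT normalization and the PCA solver) are assumptions.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.decomposition import PCA

def mean_variance_normalize(train, other):
    """Normalize frames of shape (n, 63, 412) with training-set statistics."""
    mu, sigma = train.mean(axis=0), train.std(axis=0) + 1e-8
    return (train - mu) / sigma, (other - mu) / sigma

def raw_input_features(frames):
    return frames.reshape(len(frames), -1)          # flattened normalized frame

def pca_input_features(train_frames, test_frames, n_components=1000):
    pca = PCA(n_components=n_components).fit(raw_input_features(train_frames))
    return (pca.transform(raw_input_features(train_frames)),
            pca.transform(raw_input_features(test_frames)))

def dct_input_features(frames, k=40):
    # 2-D DCT per frame; keep the upper-left k x k block (k * k coefficients)
    return np.stack([dctn(f, norm="ortho")[:k, :k].ravel() for f in frames])
```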
The first type of classifier we evaluate in this work is the feedforward neural network (DNN), consisting of 3 hidden layers, each with 512 rectified linear units (ReLUs), followed by a softmax output layer. The networks are optimized for 40 epochs with a mini-batch of 32 samples using stochastic gradient descent. Based on preliminary experiments on the validation set, hyperparameters such as the learning rate, decay rate, and L2 weight vary depending on the input format (Raw, PCA, or DCT). Generally, Raw inputs work better with smaller learning rates and heavier regularization to prevent overfitting to the high-dimensional data. As a second classifier, we use convolutional neural networks (CNNs) with 2 convolutional and max-pooling layers, followed by 2 fully-connected ReLU layers with 512 nodes. The convolutional layers use 16 filters, with 8x8 and 4x4 kernels respectively, and rectified units. The fully-connected layers use dropout with a drop probability of 0.2. Because CNN systems take longer to converge, they are optimized over 200 epochs. For all systems, at the end of every epoch, the model is evaluated on the development set, and the best model across all epochs is kept.
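A sketch of the two architectures is given below in PyTorch. The text does not state the framework, and details such as the pooling size, the number of filters in the second convolutional layer, and the handling of the softmax (here folded into a cross-entropy loss) are assumptions.

```python
import torch.nn as nn

def build_dnn(input_dim, n_classes=4, hidden=512):
    """Three hidden ReLU layers of 512 units; train with nn.CrossEntropyLoss,
    which applies the softmax internally."""
    return nn.Sequential(
        nn.Linear(input_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, n_classes),
    )

class UltrasoundCNN(nn.Module):
    """Two conv + max-pooling blocks (16 filters, 8x8 then 4x4 kernels),
    followed by two fully-connected ReLU layers with dropout."""
    def __init__(self, in_channels=1, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=8), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 16, kernel_size=4), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(512), nn.ReLU(), nn.Dropout(0.2),  # infers the flattened size
            nn.Linear(512, 512), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(512, n_classes),
        )

    def forward(self, x):        # x: (batch, in_channels, 63, 412)
        return self.classifier(self.features(x))
```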
Training Scenarios and Speaker Means
We train speaker-dependent systems separately for each speaker, using all of their training data (an average of 185 examples per speaker). These systems use less data overall than the remaining systems, although we still expect them to perform well, as the data matches in terms of speaker characteristics. Realistically, such systems would not be viable, as it would be unreasonable to collect large amounts of data for every child who is undergoing speech therapy. We further evaluate all trained systems in a multi-speaker scenario. In this configuration, the speaker sets for training, development, and testing are equal. That is, we evaluate on speakers that we have seen at training time, although on different utterances. A more realistic configuration is a speaker-independent scenario, which assumes that the speaker set available for training and development is disjoint from the speaker set used at test time. This scenario is implemented by leave-one-out cross-validation. Finally, we investigate a speaker adaptation scenario, where training data for the target speaker becomes available. This scenario is realistic, for example, if after a session, the therapist were to annotate a small number of training examples. In this work, we use the held-out training data to finetune a pretrained speaker-independent system for an additional 6 epochs in the DNN systems and 20 epochs for the CNN systems. We use all available training data across all training scenarios, and we investigate the effect of the number of samples on one of the top performing systems.
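The speaker-independent and speaker-adaptation scenarios can be summarized by the leave-one-speaker-out loop sketched below; `train_model`, `finetune`, and `evaluate` are hypothetical placeholders for the actual optimization and scoring routines, and the per-speaker data layout is an assumption.

```python
import numpy as np

def speaker_independent_eval(data, train_model, finetune, evaluate,
                             adapt=False, adapt_epochs=6):
    """`data` maps each speaker to {'train': (X, y), 'test': (X, y)}."""
    scores = {}
    for held_out in data:
        # pool the training data of every speaker except the held-out one
        X_tr = np.concatenate([data[s]['train'][0] for s in data if s != held_out])
        y_tr = np.concatenate([data[s]['train'][1] for s in data if s != held_out])
        model = train_model(X_tr, y_tr)
        if adapt:  # speaker adaptation: finetune on the held-out speaker's train split
            model = finetune(model, *data[held_out]['train'], epochs=adapt_epochs)
        scores[held_out] = evaluate(model, *data[held_out]['test'])
    return scores
```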
This work is primarily concerned with generalizing to unseen speakers. Therefore, we investigate a method to provide models with speaker-specific inputs. A simple approach is to use the speaker mean, which is the pixel-wise mean of all raw frames associated with a given speaker, illustrated in Figure FIGREF8 . The mean frame might capture an overall area of tongue activity, average out noise, and compensate for probe placement differences across speakers. Speaker means are computed after mean variance normalization. For PCA-based systems, matrix decomposition is applied on the matrix of speaker means for the training data with 50 components being kept, while the 2D DCT is applied normally to each mean frame. In the DNN systems, the speaker mean is appended to the input vector. In the CNN system, the raw speaker mean is given to the network as a second channel. All model configurations are similar to those described earlier, except for the DNN using Raw input. Earlier experiments have shown that a larger number of parameters are needed for good generalization with a large number of inputs, so we use layers of 1024 nodes rather than 512.
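The speaker-mean features can be attached to the two model types as sketched below; shapes assume the raw 63x412 frames, and the helper names are illustrative only.

```python
import numpy as np

def speaker_mean(frames):
    """Pixel-wise mean over all (mean-variance normalized) frames of one speaker."""
    return frames.mean(axis=0)                       # frames: (n, 63, 412)

def dnn_input_with_mean(frame_features, mean_features):
    # append the (raw, PCA- or DCT-compressed) speaker mean to every input vector
    tiled = np.tile(mean_features, (len(frame_features), 1))
    return np.concatenate([frame_features, tiled], axis=1)

def cnn_input_with_mean(frames, mean_frame):
    # stack the raw speaker mean as a second channel: output is (n, 2, 63, 412)
    return np.stack([frames, np.broadcast_to(mean_frame, frames.shape)], axis=1)
```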
Results and Discussion
Results for all systems are presented in Table TABREF10. When comparing preprocessing methods, we observe that PCA underperforms when compared with the 2-dimensional DCT or with the raw input. DCT-based systems achieve good results when compared with similar model architectures, especially when using smaller amounts of data, as in the speaker-dependent scenario. When compared with raw-input DNNs, the DCT-based systems likely benefit from the reduced dimensionality. In this case, lower-dimensional inputs allow the model to generalize better, and the truncation of the DCT matrix helps remove noise from the images. Compared with PCA-based systems, we hypothesize that the observed improvements are due to the DCT's ability to encode the 2-D structure of the image, which is ignored by PCA. However, the DNN-DCT system does not outperform a CNN with raw input, ranking last across adapted systems.
When comparing training scenarios, speaker-independent systems underperform, as expected, which illustrates the difficulty of generalizing to unseen speakers. Multi-speaker systems outperform the corresponding speaker-dependent systems, which shows the usefulness of learning from a larger database, even if it is variable across speakers. Adapted systems improve over the dependent systems, except when using DCT. It is unclear why DCT-based systems underperform when adapting pre-trained models. Figure FIGREF11 shows the effect of the size of the adaptation data when finetuning a pre-trained speaker-independent system. As expected, the more data is available, the better the system performs. For the CNN system, with roughly 50 samples the adapted model outperforms a comparable speaker-dependent system trained on roughly three times as many examples.
Speaker means improve results across all scenarios and are particularly useful for speaker-independent systems. The ability to generalize to unseen speakers is clearest in the CNN system. Using the mean as a second channel in the convolutional network has the advantage of relating each pixel to its corresponding speaker-mean value, allowing the model to better generalize to unseen speakers.
Figure FIGREF12 shows pair-wise scatterplots for the CNN system. Training scenarios are compared in terms of the effect on individual speakers. It is observed, for example, that the performance of a speaker-adapted system is similar to a multi-speaker system, with most speakers clustered around the identity line (bottom left subplot). Figure FIGREF12 also illustrates the variability across speakers for each of the training scenarios. The classification task is easier for some speakers than others. In an attempt to understand this variability, we can look at correlation between accuracy scores and various speaker details. For the CNN systems, we have found some correlation (Pearson's product-moment correlation) between accuracy and age for the dependent ( INLINEFORM0 ), multi-speaker ( INLINEFORM1 ), and adapted ( INLINEFORM2 ) systems. A very small correlation ( INLINEFORM3 ) was found for the independent system. Similarly, some correlation was found between accuracy and sampling score ( INLINEFORM4 ) for the dependent system, but not for the remaining scenarios. No correlation was found between accuracy and gender (point biserial correlation).
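The correlation analysis above can be reproduced with standard SciPy routines, as sketched below; the variable names and the binary coding of gender are assumptions made for illustration.

```python
from scipy.stats import pearsonr, pointbiserialr

def speaker_correlations(accuracies, ages, sampling_scores, genders):
    """Per-speaker accuracies vs. speaker attributes; `genders` is binary-coded
    (e.g. 0 = male, 1 = female) for the point-biserial correlation."""
    return {
        "age": pearsonr(ages, accuracies),                      # (r, p-value)
        "sampling_score": pearsonr(sampling_scores, accuracies),
        "gender": pointbiserialr(genders, accuracies),
    }
```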
Future Work
There are various possible extensions of this work. For example, one could use all frames assigned to a phone, rather than only the middle frame; recurrent architectures are natural candidates for such systems. Additionally, if using these techniques for speech therapy, the audio signal will be available. An extension of these analyses should therefore not be limited to the ultrasound signal, but should instead evaluate whether audio and ultrasound can be complementary. Further work should aim to extend the four classes to a more fine-grained set of places of articulation, possibly based on phonological processes. Similarly, investigating which classes lead to classification errors might help explain some of the observed results. Although we have looked at variables such as age, gender, and amount of data to explain speaker variation, there may be additional factors involved, such as the general quality of the ultrasound image. Image quality could be affected by probe placement, dry mouths, or other factors. Automatically identifying or measuring such cases could be beneficial for speech therapy, for example, by signalling to the therapist that the data being collected is sub-optimal.
Conclusion
In this paper, we have investigated speaker-independent models for the classification of phonetic segments from raw ultrasound data. We have shown that the performance of the models heavily degrades when evaluated on data from unseen speakers. This is a result of the variability in ultrasound images, mostly due to differences across speakers, but also due to shifts in probe placement. Using the mean of all ultrasound frames for a new speaker improves the generalization of the models to unseen data, especially when using convolutional neural networks. We have also shown that adapting a pre-trained speaker-independent system using as few as 50 ultrasound frames can outperform a corresponding speaker-dependent system. | 10700 |
2d62a75af409835e4c123a615b06235a352a67fe | 2d62a75af409835e4c123a615b06235a352a67fe_0 | Q: What model do they use to classify phonetic segments?
| feedforward neural networks, convolutional neural networks
fffbd6cafef96eeeee2f9fa5d8ab2b325ec528e6 | fffbd6cafef96eeeee2f9fa5d8ab2b325ec528e6_0 | Q: How many speakers do they have in the dataset?
| 58
c034f38a570d40360c3551a6469486044585c63c | c034f38a570d40360c3551a6469486044585c63c_0 | Q: How much better is the proposed method than the baselines, perplexity-wise?
Text: Introduction
Recent development in neural language modeling has generated significant excitement in the open-domain dialog generation community. The success of sequence-to-sequence learning BIBREF0, BIBREF1 in the field of neural machine translation has inspired researchers to apply the recurrent neural network (RNN) encoder-decoder structure to response generation BIBREF2. Specifically, the encoder RNN reads the input message, encodes it into a fixed context vector, and the decoder RNN uses it to generate the response. Shang et al. BIBREF3 applied the same structure combined with attention mechanism BIBREF4 on Twitter-style microblogging data. Following the vanilla sequence-to-sequence structure, various improvements have been made on the neural conversation model—for example, increasing the diversity of the response BIBREF5, BIBREF6, modeling personalities of the speakers BIBREF7, and developing topic aware dialog systems BIBREF8.
Some of the recent work aims at incorporating affect information into neural conversational models. While making the responses emotionally richer, existing approaches either explicitly require an emotion label as input BIBREF9, or rely on hand-crafted rules to determine the desired emotional responses BIBREF10, BIBREF11, ignoring the subtle emotional interactions captured in multi-turn conversations, which we believe to be an important aspect of human dialogs. For example, Gottman BIBREF12 found that couples are likely to practice so-called emotional reciprocity. When an argument starts, one partner's angry and aggressive utterance is often met with an equally furious and negative utterance, resulting in more heated exchanges. On the other hand, responding with complementary emotions (such as reassurance and sympathy) is more likely to lead to a successful relationship. However, to the best of our knowledge, the psychology and social science literature does not offer clear rules for emotional interaction. It seems such social and emotional intelligence is captured in our conversations. This is why we believe that a data-driven approach will have an advantage.
In this paper, we propose an end-to-end, data-driven multi-turn dialog system capable of learning and generating emotionally appropriate and human-like responses, with the ultimate goal of reproducing social behaviors that are habitual in human-human conversations. We chose the multi-turn setting because it is in such settings that emotional appropriateness matters most. To this end, we employ the latest multi-turn dialog model by Xing et al. BIBREF13, but we add an additional emotion RNN to process the emotional information in each history utterance. By leveraging an external text analysis program, we encode the emotion aspects of each utterance into a fixed-sized one-zero vector. This emotion RNN reads and encodes the input affect information, and then uses the final hidden state as the emotion representation vector for the context. When decoding, at each time step, this emotion vector is concatenated with the hidden state of the decoder and passed to the softmax layer to produce the probability distribution over the vocabulary.
Our contributions are therefore threefold. (1) We propose a novel emotion-tracking dialog generation model that learns the emotional interactions directly from the data. This approach is free of human-defined heuristic rules and hence is more robust and fundamental than those described in existing work BIBREF9, BIBREF10, BIBREF11. (2) We apply the emotion-tracking mechanism to multi-turn dialogs, which has never been attempted before. Human evaluation shows that our model produces responses that are emotionally more appropriate than the baselines, while slightly improving the language fluency. (3) We illustrate a human-evaluation approach for judging machine-produced emotional dialogs. We consider factors such as the balance of positive and negative sentiments in test dialogs, a well-chosen range of topics, and dialogs that our human evaluators can relate to. It is the first time such an approach has been designed with consideration for the human judges. Our main goal is to increase the objectivity of the results and reduce judges' mistakes due to out-of-context dialogs they have to evaluate.
The rest of the paper unfolds as follows. Section SECREF2 discusses some related work. In Section SECREF3, we give detailed description of the methodology. We present experimental results and some analysis in Section SECREF4. The paper is concluded in Section SECREF5, followed by some future work we plan to do.
Related Work
Many early open-domain dialog systems are rule-based and often require expert knowledge to develop. More recent work in response generation seeks data-driven solutions, leveraging on machine learning techniques and the availability of data. Ritter et al. BIBREF14 first applied statistical machine translation (SMT) methods to this area. However, it turns out that bilingual translation and response generation are different. The source and target sentences in translation share the same meaning; thus the words in the two sentences tend to align well with each other. However, for response generation, one could have many equally good responses for a single input. Later studies use the sequence-to-sequence neural framework to model dialogs, followed by various improving work on the quality of the responses, especially the emotional aspects of the conversations.
The vanilla RNN encoder-decoder is usually applied to single-turn response generation, where the response is generated based on one single input message. In multi-turn settings, where a context with multiple history utterances is given, the same structure often ignores the hierarchical characteristic of the context. Some recent work addresses this problem by adopting a hierarchical recurrent encoder-decoder (HRED) structure BIBREF15, BIBREF16, BIBREF17. To give attention to different parts of the context while generating responses, Xing et al. BIBREF13 proposed the hierarchical recurrent attention network (HRAN) that uses a hierarchical attention mechanism. However, these multi-turn dialog models do not take into account the turn-taking emotional changes of the dialog.
Recent work on incorporating affect information into natural language processing tasks, such as building emotional dialog systems and affect language models, has inspired our current work. For example, the Emotional Chatting Machine (ECM) BIBREF9 takes as input a post and a specified emotional category and generates a response that belongs to the pre-defined emotion category. The main idea is to use an internal memory module to capture the emotion dynamics during decoding, and an external memory module to model emotional expressions explicitly by assigning different probability values to emotional words as opposed to regular words. However, the problem setting requires an emotional label as an input, which might be impractical in real scenarios. Asghar et al. BIBREF10 proposed to augment the word embeddings with a VAD (valence, arousal, and dominance) affective space by using an external dictionary, and designed three affect-related loss functions, namely minimizing affective dissonance, maximizing affective dissonance, and maximizing affective content. The paper also proposed the affectively diverse beam search during decoding, so that the generated candidate responses are as affectively diverse as possible. However, the literature in affective science does not necessarily validate such rules. In fact, the best strategy for speaking to an angry customer is the de-escalation strategy (using neutral words to validate anger) rather than employing equally emotional words (minimizing affect dissonance) or words that convey happiness (maximizing affect dissonance). Zhong et al. BIBREF11 proposed a biased attention mechanism on affect-rich words in the input message, also by taking advantage of the VAD embeddings. The model is trained with a weighted cross-entropy loss function, which encourages the generation of emotional words. However, these models only deal with single-turn conversations. More importantly, they all adopt hand-coded emotion-responding mechanisms. To our knowledge, we are the first to consider modeling the emotional flow and its appropriateness in a multi-turn dialog system by learning from humans.
Model
In this paper, we consider the problem of generating a response $\mathbf {y}$ given a context $\mathbf {X}$ consisting of multiple previous utterances, by estimating the probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ from a data set $\mathcal {D}=\lbrace (\mathbf {X}^{(i)},\mathbf {y}^{(i)})\rbrace _{i=1}^N$ containing $N$ context-response pairs. Here $\mathbf {X}^{(i)}=(\mathbf {x}^{(i)}_1,\mathbf {x}^{(i)}_2,\dots ,\mathbf {x}^{(i)}_{m_i})$ is a sequence of $m_i$ utterances, each utterance $\mathbf {x}^{(i)}_j$ is a sequence of $n_{ij}$ words, and similarly, the response $\mathbf {y}^{(i)}$ is a sequence of $T_i$ words.
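To make the notation above concrete, a single context-response pair from $\mathcal {D}$ can be pictured as a list of tokenized utterances plus a tokenized response. The following minimal sketch is purely illustrative; the utterance texts are invented and not taken from either corpus.

```python
# One hypothetical context-response pair (X, y): the context X holds m = 3
# previous utterances, each a list of word tokens; y is the target response.
context_X = [
    ["i", "failed", "my", "driving", "test", "again"],   # x_1
    ["oh", "no", ",", "i", "am", "so", "sorry"],          # x_2
    ["i", "do", "not", "know", "what", "to", "do"],       # x_3
]
response_y = ["do", "not", "give", "up", ",", "you", "will", "pass", "next", "time"]
```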
Usually the probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ can be modeled by an RNN language model conditioned on $\mathbf {X}$. When generating the word $y_t$ at time step $t$, the context $\mathbf {X}$ is encoded into a fixed-sized dialog context vector $\mathbf {c}_t$ by following the hierarchical attention structure in HRAN BIBREF13. Additionally, we extract the emotion information from the utterances in $\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\mathbf {e}$, which is combined with $\mathbf {c}_t$ to produce the distribution. The overall architecture of the model is depicted in Figure FIGREF4. We are going to elaborate on how to obtain $\mathbf {c}_t$ and $\mathbf {e}$, and how they are combined in the decoding part.
Model ::: Hierarchical Attention
The hierarchical attention structure involves two encoders to produce the dialog context vector $\mathbf {c}_t$, namely the word-level encoder and the utterance-level encoder. The word-level encoder is essentially a bidirectional RNN with gated recurrent units (GRU) BIBREF1. For utterance $\mathbf {x}_j$ in $\mathbf {X}$ ($j=1,2,\dots ,m$), the bidirectional encoder produces two hidden states at each word position $k$, the forward hidden state $\mathbf {h}^\mathrm {f}_{jk}$ and the backward hidden state $\mathbf {h}^\mathrm {b}_{jk}$. The final hidden state $\mathbf {h}_{jk}$ is then obtained by concatenating the two, $\mathbf {h}_{jk}=[\mathbf {h}^\mathrm {f}_{jk};\,\mathbf {h}^\mathrm {b}_{jk}]$.
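As an illustration of the word-level encoder, the following PyTorch sketch encodes a batch of utterances with a bidirectional GRU. The class name, the single-utterance interface, and the absence of padding/masking are our own simplifications; the 256-dimensional sizes follow the experimental settings reported later.

```python
import torch
import torch.nn as nn

class WordLevelEncoder(nn.Module):
    """Bidirectional GRU over the words of one utterance (a sketch)."""
    def __init__(self, vocab_size, emb_size=256, hidden_size=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_size)
        self.bigru = nn.GRU(emb_size, hidden_size,
                            bidirectional=True, batch_first=True)

    def forward(self, token_ids):        # token_ids: (batch, n_j)
        emb = self.embed(token_ids)      # (batch, n_j, emb_size)
        states, _ = self.bigru(emb)      # (batch, n_j, 2 * hidden_size)
        # The first half of the last axis is the forward state h^f_jk and the
        # second half the backward state h^b_jk, i.e. the concatenation h_jk.
        return states
```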
The utterance-level encoder is a unidirectional RNN with GRU that goes from the last utterance in the context to the first, with its input at each step being the summary of the corresponding utterance, which is obtained by applying a Bahdanau-style attention mechanism BIBREF4 on the word-level encoder output. More specifically, at decoding step $t$, the summary of utterance $\mathbf {x}_j$ is a linear combination of $\mathbf {h}_{jk}$, for $k=1,2,\dots ,n_j$, namely $\sum _{k=1}^{n_j}\alpha _{jk}^t\,\mathbf {h}_{jk}$.
Here $\alpha _{jk}^t$ is the word-level attention score placed on $\mathbf {h}_{jk}$; it is computed with an additive scoring function followed by a softmax normalization over the words of $\mathbf {x}_j$, where $\mathbf {s}_{t-1}$ is the previous hidden state of the decoder, $\mathbf {\ell }_{j+1}^t$ is the previous hidden state of the utterance-level encoder, and $\mathbf {v}_a$, $\mathbf {U}_a$, $\mathbf {V}_a$ and $\mathbf {W}_a$ are word-level attention parameters. The final dialog context vector $\mathbf {c}_t$ is then obtained as another linear combination of the outputs of the utterance-level encoder $\mathbf {\ell }_{j}^t$, for $j=1,2,\dots ,m$, namely $\mathbf {c}_t=\sum _{j=1}^{m}\beta _{j}^t\,\mathbf {\ell }_{j}^t$.
Here $\beta _{j}^t$ is the utterance-level attention score placed on $\mathbf {\ell }_{j}^t$; it is computed analogously with a softmax normalization over the $m$ utterances, where $\mathbf {s}_{t-1}$ is the previous hidden state of the decoder, and $\mathbf {v}_b$, $\mathbf {U}_b$ and $\mathbf {W}_b$ are utterance-level attention parameters.
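Since the exact scoring functions are not reproduced above, the following sketch instantiates the two attention levels with the standard Bahdanau-style additive form, reusing the parameter names $\mathbf {U}_a,\mathbf {V}_a,\mathbf {W}_a,\mathbf {v}_a$ and $\mathbf {U}_b,\mathbf {W}_b,\mathbf {v}_b$. It should be read as one plausible instantiation rather than the authors' exact equations.

```python
import torch
import torch.nn as nn

class HierarchicalAttention(nn.Module):
    """Word- and utterance-level additive attention (one plausible form)."""
    def __init__(self, word_dim, utt_dim, dec_dim, attn_word=256, attn_utt=128):
        super().__init__()
        # word-level score: v_a^T tanh(U_a s_{t-1} + V_a l_{j+1} + W_a h_jk)
        self.U_a = nn.Linear(dec_dim, attn_word, bias=False)
        self.V_a = nn.Linear(utt_dim, attn_word, bias=False)
        self.W_a = nn.Linear(word_dim, attn_word, bias=False)
        self.v_a = nn.Linear(attn_word, 1, bias=False)
        # utterance-level score: v_b^T tanh(U_b s_{t-1} + W_b l_j)
        self.U_b = nn.Linear(dec_dim, attn_utt, bias=False)
        self.W_b = nn.Linear(utt_dim, attn_utt, bias=False)
        self.v_b = nn.Linear(attn_utt, 1, bias=False)

    def word_summary(self, s_prev, l_next, h_j):
        # h_j: (n_j, word_dim); s_prev: (dec_dim,); l_next: (utt_dim,)
        scores = self.v_a(torch.tanh(
            self.U_a(s_prev) + self.V_a(l_next) + self.W_a(h_j))).squeeze(-1)
        alpha = torch.softmax(scores, dim=0)            # word-level weights
        return (alpha.unsqueeze(-1) * h_j).sum(dim=0)   # summary of utterance j

    def context_vector(self, s_prev, l_all):
        # l_all: (m, utt_dim) outputs of the utterance-level encoder
        scores = self.v_b(torch.tanh(
            self.U_b(s_prev) + self.W_b(l_all))).squeeze(-1)
        beta = torch.softmax(scores, dim=0)             # utterance-level weights
        return (beta.unsqueeze(-1) * l_all).sum(dim=0)  # dialog context c_t
```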
Model ::: Emotion Encoder
In order to capture the emotion information carried in the context $\mathbf {X}$, we utilize an external text analysis program called the Linguistic Inquiry and Word Count (LIWC) BIBREF18. LIWC accepts text files as input, and then compares each word in the input with a user-defined dictionary, assigning it to one or more of the pre-defined psychologically-relevant categories. We make use of five of these categories, related to emotion, namely positive emotion, negative emotion, anxious, angry, and sad. Using the newest version of the program LIWC2015, we are able to map each utterance $\mathbf {x}_j$ in the context to a six-dimensional indicator vector ${1}(\mathbf {x}_j)$, with the first five entries corresponding to the five emotion categories, and the last one corresponding to neutral. If any word in $\mathbf {x}_j$ belongs to one of the five categories, then the corresponding entry in ${1}(\mathbf {x}_j)$ is set to 1; otherwise, $\mathbf {x}_j$ is treated as neutral, with the last entry of ${1}(\mathbf {x}_j)$ set to 1. For example, assuming $\mathbf {x}_j=$ “he is worried about me”, then
${1}(\mathbf {x}_j)=(0,1,1,0,0,0)^\top $, since the word “worried” is assigned to both negative emotion and anxious. We apply a dense layer with a sigmoid activation function on top of ${1}(\mathbf {x}_j)$ to embed the emotion indicator vector into a continuous space, $\mathbf {a}_j=\sigma \big (\mathbf {W}_e\,{1}(\mathbf {x}_j)+\mathbf {b}_e\big )$,
where $\mathbf {W}_e$ and $\mathbf {b}_e$ are trainable parameters. The emotion flow of the context $\mathbf {X}$ is then modeled by a unidirectional RNN with GRU going from the first utterance in the context to the last, with its input being $\mathbf {a}_j$ at each step. The final emotion context vector $\mathbf {e}$ is obtained as the last hidden state of this emotion encoding RNN.
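The emotion encoder can be sketched as follows. LIWC2015 is a proprietary program, so the tiny keyword lexicon below merely stands in for its dictionary; everything else follows the description above (a six-dimensional indicator, a sigmoid dense layer, and a forward GRU over the utterances).

```python
import torch
import torch.nn as nn

# Toy stand-ins for the LIWC categories; the real system queries LIWC2015.
LEXICON = {
    "positive": {"glad", "fun", "love"},
    "negative": {"worried", "hate", "awful"},
    "anxious":  {"worried", "nervous"},
    "angry":    {"furious", "mad"},
    "sad":      {"sorry", "cry"},
}
CATEGORIES = ["positive", "negative", "anxious", "angry", "sad"]

def emotion_indicator(tokens):
    """Map one utterance to the 6-dim indicator (5 categories + neutral)."""
    vec = [1.0 if any(w in LEXICON[c] for w in tokens) else 0.0 for c in CATEGORIES]
    vec.append(1.0 if sum(vec) == 0 else 0.0)   # neutral only if nothing fired
    return torch.tensor(vec)

class EmotionEncoder(nn.Module):
    """Sigmoid embedding of the indicator, then a GRU over the utterances."""
    def __init__(self, emb_size=256, hidden_size=256):
        super().__init__()
        self.dense = nn.Linear(6, emb_size)        # W_e, b_e
        self.gru = nn.GRUCell(emb_size, hidden_size)

    def forward(self, indicators):                 # (m, 6), first to last utterance
        h = torch.zeros(self.gru.hidden_size)
        for ind in indicators:
            a_j = torch.sigmoid(self.dense(ind))   # a_j = sigma(W_e 1(x_j) + b_e)
            h = self.gru(a_j.unsqueeze(0), h.unsqueeze(0)).squeeze(0)
        return h                                   # emotion context vector e
```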
Model ::: Decoding
The probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ can be factorized as $p(\mathbf {y}\,|\,\mathbf {X})=\prod _{t=1}^{T}p(y_t\,|\,y_1,\dots ,y_{t-1},\mathbf {X})$, where $T$ is the length of the response.
We model this distribution using an RNN language model along with the emotion context vector $\mathbf {e}$. Specifically, at time step $t$, the hidden state of the decoder $\mathbf {s}_t$ is obtained by applying the GRU function to the previous hidden state $\mathbf {s}_{t-1}$, the dialog context vector $\mathbf {c}_t$, and the word embedding $\mathbf {w}_{y_{t-1}}$ of the previously generated word $y_{t-1}$. Similar to Affect-LM BIBREF19, we then define a new feature vector $\mathbf {o}_t$ by concatenating $\mathbf {s}_t$ with the emotion context vector $\mathbf {e}$, $\mathbf {o}_t=[\mathbf {s}_t;\,\mathbf {e}]$,
on which we apply a softmax layer to obtain a probability distribution over the vocabulary. Each term $p(y_t\,|\,y_1,\dots ,y_{t-1},\mathbf {X})$ in the factorization above is then the entry of this distribution corresponding to $y_t$.
We use the cross-entropy loss as our objective function.
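A single decoding step can then be sketched as below. Feeding the dialog context vector $\mathbf {c}_t$ into the GRU cell together with the previous word embedding is our reading of the standard attention decoder and is an assumption; the concatenation of $\mathbf {s}_t$ with $\mathbf {e}$ before the output layer follows the text directly.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmotionAwareDecoder(nn.Module):
    """One-step GRU decoder whose output layer also sees the emotion vector e."""
    def __init__(self, vocab_size, emb_size=256, hidden_size=256,
                 ctx_size=256, emo_size=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_size)
        self.cell = nn.GRUCell(emb_size + ctx_size, hidden_size)
        self.out = nn.Linear(hidden_size + emo_size, vocab_size)

    def step(self, prev_word, s_prev, c_t, e):
        w = self.embed(prev_word)                      # (batch, emb_size)
        s_t = self.cell(torch.cat([w, c_t], dim=-1), s_prev)
        o_t = torch.cat([s_t, e], dim=-1)              # concatenate with e
        logits = self.out(o_t)                         # pre-softmax scores
        return logits, s_t

def step_loss(logits, target_ids):
    """Cross-entropy for one time step; summing over t gives the objective."""
    return F.cross_entropy(logits, target_ids)
```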
Evaluation
We trained our model using two different datasets and compared its performance with HRAN as well as the basic sequence-to-sequence model by performing both offline and online testing.
Evaluation ::: Datasets
We use two different dialog corpora to train our model—the Cornell Movie Dialogs Corpus BIBREF20 and the DailyDialog dataset BIBREF21.
Cornell Movie Dialogs Corpus. The dataset contains 83,097 dialogs (220,579 conversational exchanges) extracted from raw movie scripts. In total there are 304,713 utterances.
DailyDialog. The dataset is developed by crawling raw data from websites used for language learners to learn English dialogs in daily life. It contains 13,118 dialogs in total.
We summarize some of the basic information regarding the two datasets in Table TABREF25.
In our experiments, the models are first trained on the Cornell Movie Dialogs Corpus, and then fine-tuned on the DailyDialog dataset. We adopted this training pattern because the Cornell dataset is bigger but noisier, while DailyDialog is smaller but closer to everyday conversation. To create a training set and a validation set for each of the two datasets, we take segments of each dialog with no more than six turns, to serve as the training/validation examples. Specifically, for each dialog $\mathbf {D}=(\mathbf {x}_1,\mathbf {x}_2,\dots ,\mathbf {x}_M)$, we create $M-1$ context-response pairs, namely $\mathbf {U}_i=(\mathbf {x}_{s_i},\dots ,\mathbf {x}_i)$ and $\mathbf {y}_i=\mathbf {x}_{i+1}$, for $i=1,2,\dots ,M-1$, where $s_i=\max (1,i-4)$. We filter out those pairs that have at least one utterance with length greater than 30. We also reduce the frequency of those pairs whose responses appear too many times (the threshold is set to 10 for Cornell, and 5 for DailyDialog), to prevent them from dominating the learning procedure. See Table TABREF25 for the sizes of the training and validation sets. The test set consists of 100 dialogs with four turns. We give a more detailed description of how we create the test set in Section SECREF31.
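The pair-construction procedure can be summarized with the following plain-Python sketch; function and variable names are ours, and the window of at most five history utterances corresponds to $s_i=\max (1,i-4)$.

```python
from collections import Counter

def make_pairs(dialogs, max_len=30, freq_cap=10):
    """Turn dialogs (lists of tokenized utterances) into context-response pairs."""
    pairs = []
    for dialog in dialogs:                    # dialog = [x_1, ..., x_M]
        for i in range(1, len(dialog)):       # 0-based index of the response
            start = max(0, i - 5)             # s_i = max(1, i - 4) in 1-based terms
            context, response = dialog[start:i], dialog[i]
            if any(len(u) > max_len for u in context) or len(response) > max_len:
                continue                      # drop pairs with overly long utterances
            pairs.append((context, response))
    # cap pairs whose response occurs too often (10 for Cornell, 5 for DailyDialog)
    seen = Counter()
    capped = []
    for context, response in pairs:
        key = tuple(response)
        if seen[key] < freq_cap:
            capped.append((context, response))
            seen[key] += 1
    return capped
```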
Evaluation ::: Baselines and Implementation
We compared our multi-turn emotionally engaging dialog model (denoted as MEED) with two baselines—the vanilla sequence-to-sequence model (denoted as S2S) and HRAN. We chose S2S and HRAN as baselines because we would like to evaluate our model's capability to keep track of the multi-turn context and to produce emotionally more appropriate responses, respectively. In order to adapt S2S to the multi-turn setting, we concatenate all the history utterances in the context into one.
For all the models, the vocabulary consists of the 20,000 most frequent words in the Cornell and DailyDialog datasets, plus three extra tokens: <unk> for words that do not exist in the vocabulary, <go> indicating the beginning of an utterance, and <eos> indicating the end of an utterance. Here we summarize the configurations and parameters of our experiments:
We set the word embedding size to 256. We initialized the word embeddings in the models with word2vec BIBREF22 vectors first trained on Cornell and then fine-tuned on DailyDialog, consistent with the training procedure of the models.
We set the number of hidden units of each RNN to 256, the word-level attention depth to 256, and the utterance-level attention depth to 128. The output size of the emotion embedding layer is 256.
We optimized the objective function using the Adam optimizer BIBREF23 with an initial learning rate of 0.001. We stopped training the models when the lowest perplexity on the validation sets was achieved.
For prediction, we used beam search BIBREF24 with a beam width of 256.
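For reference, the beam search used for prediction can be sketched generically as below; the `step_fn` interface, the length cap, and the termination handling are simplifications and not the exact decoding procedure used in the experiments.

```python
def beam_search(step_fn, start_state, bos_id, eos_id, beam_width=256, max_len=30):
    """Generic beam search; step_fn(prev_token, state) -> (log_probs, new_state)."""
    beams = [([bos_id], 0.0, start_state)]          # (tokens, score, state)
    finished = []
    for _ in range(max_len):
        candidates = []
        for tokens, score, state in beams:
            log_probs, new_state = step_fn(tokens[-1], state)
            # keep only the best continuations of this hypothesis
            best = sorted(enumerate(log_probs), key=lambda p: -p[1])[:beam_width]
            for tok, lp in best:
                candidates.append((tokens + [tok], score + lp, new_state))
        beams = sorted(candidates, key=lambda b: -b[1])[:beam_width]
        finished.extend(b for b in beams if b[0][-1] == eos_id)
        beams = [b for b in beams if b[0][-1] != eos_id]
        if not beams:
            break
    best_all = finished + beams
    return max(best_all, key=lambda b: b[1])[0] if best_all else [bos_id]
```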
Evaluation ::: Evaluation Metrics
The evaluation of chatbots remains an open problem in the field. Recent work BIBREF25 has shown that the automatic evaluation metrics borrowed from machine translation such as BLEU score BIBREF26 tend to align poorly with human judgement. Therefore, in this paper, we mainly adopt human evaluation, along with perplexity, following the existing work.
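For completeness, perplexity is simply the exponential of the average per-token negative log-likelihood, as in the following helper (assuming the model exposes per-token log-probabilities):

```python
import math

def perplexity(token_log_probs):
    """token_log_probs: log p(y_t | y_<t, X) for every target token in a corpus."""
    n = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / n) if n else float("inf")
```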
Evaluation ::: Evaluation Metrics ::: Human evaluation setup
To develop a test set for human evaluation, we first selected the emotionally colored dialogs with exactly four turns from the DailyDialog dataset. In the dataset each dialog turn is annotated with a corresponding emotional category, including the neutral one. For our purposes we filtered out only those dialogs where more than a half of utterances have non-neutral emotional labels. This gave us 78 emotionally positive dialogs and 14 emotionally negative dialogs. In order to have a balanced test set with equal number of positive and negative dialogs, we recruited two English-speaking students from our university without any relationship to the authors' lab and instructed them to create five negative dialogs with four turns, as if they were interacting with another human, according to each of the following topics: relationships, entertainment, service, work and study, and everyday situations. Thus each person produced 25 dialogs, and in total we obtained 50 emotionally negative daily dialogs in addition to the 14 already available. To form the test set, we randomly selected 50 emotionally positive and 50 emotionally negative dialogs from the two pools of dialogs described above (78 positive dialogs from DailyDialog, 64 negative dialogs from DailyDialog and human-generated).
For human evaluation of the models, we recruited another four English-speaking students from our university without any relationship to the authors' lab to rate the responses generated by the models. Specifically, we randomly shuffled the 100 dialogs in the test set, then we used the first three utterances of each dialog as the input to the three models being compared and let them generate the responses. According to the context given, the raters were instructed to evaluate the quality of the responses based on three criteria: (1) grammatical correctness—whether or not the response is fluent and free of grammatical mistakes; (2) contextual coherence—whether or not the response is context sensitive to the previous dialog history; (3) emotional appropriateness—whether or not the response conveys the right emotion and feels as if it had been produced by a human. For each criterion, the raters gave scores of either 0, 1 or 2, where 0 means bad, 2 means good, and 1 indicates neutral.
Evaluation ::: Results
Table TABREF34 gives the perplexity scores obtained by the three models on the two validation sets and the test set. As shown in the table, MEED achieves the lowest perplexity score on all three sets. We also conducted a t-test on the perplexities obtained, and the results show significant improvements (with $p$-value $<0.05$).
Tables TABREF34, TABREF35 and TABREF35 summarize the human evaluation results on the responses' grammatical correctness, contextual coherence, and emotional appropriateness, respectively. In the tables, we give the percentage of votes each model received for the three scores, the average score obtained with improvements over S2S, and the agreement score among the raters. Note that we report Fleiss' $\kappa $ score BIBREF27 for contextual coherence and emotional appropriateness, and Finn's $r$ score BIBREF28 for grammatical correctness. We did not use Fleiss' $\kappa $ score for grammatical correctness because, when agreement is extremely high, Fleiss' $\kappa $ becomes very sensitive to prevalence BIBREF29. Conversely, we did not use Finn's $r$ score for contextual coherence and emotional appropriateness because it is only reasonable when the observed variance is significantly less than the chance variance BIBREF30, which did not apply to these two criteria. As shown in the tables, we obtained high agreement among the raters for grammatical correctness, and fair agreement among the raters for contextual coherence and emotional appropriateness. For grammatical correctness, all three models achieved high scores, which means all models are capable of generating fluent utterances that make sense. For contextual coherence and emotional appropriateness, MEED achieved higher average scores than S2S and HRAN, which means MEED keeps better track of the context and can generate responses that are emotionally more appropriate and natural. We conducted the Friedman test BIBREF31 on the human evaluation results, showing the improvements of MEED are significant (with $p$-value $<0.01$).
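For reproducibility, Fleiss' $\kappa $ for the four-rater, three-category setting above can be computed as follows; the example call at the end uses invented counts.

```python
def fleiss_kappa(ratings):
    """ratings[i][c] = number of raters assigning item i to category c."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])                  # raters per item (4 in our setup)
    n_cats = len(ratings[0])
    # proportion of all assignments falling into each category
    p_cat = [sum(row[c] for row in ratings) / (n_items * n_raters)
             for c in range(n_cats)]
    # observed agreement per item, averaged over items
    p_bar = sum(
        (sum(x * x for x in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_items
    p_exp = sum(p * p for p in p_cat)           # agreement expected by chance
    return (p_bar - p_exp) / (1 - p_exp)

# e.g. three items rated by four raters into the scores {0, 1, 2}:
# fleiss_kappa([[0, 1, 3], [0, 0, 4], [2, 2, 0]])
```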
Evaluation ::: Results ::: Case Study
We present four sample dialogs in Table TABREF36, along with the responses generated by the three models. Dialogs 1 and 2 are emotionally positive and dialogs 3 and 4 are negative. For the first two examples, we can see that MEED is able to generate more emotional content (like “fun” and “congratulations”) that is appropriate according to the context. For dialog 4, MEED responds in sympathy to the other speaker, which is consistent with the second utterance in the context. In contrast, HRAN poses a question in reply, contradicting the dialog history.
Conclusion and Future Work
According to the Media Equation Theory BIBREF32, people respond to computers socially. This means humans expect to talk to computers the way they talk to other human beings. This is why we believe reproducing social and conversational intelligence will make social chatbots more believable and socially engaging. In this paper, we propose a multi-turn dialog system capable of generating emotionally appropriate responses, which is the first step toward such a goal. We have demonstrated how to do so by (1) modeling utterances with extra affect vectors, (2) creating an emotional encoding mechanism that learns emotion exchanges from the dataset, (3) curating a multi-turn dialog dataset, and (4) evaluating the model with offline and online experiments.
As future work, we would like to investigate the diversity issue of the responses generated, possibly by extending the mutual information objective function BIBREF5 to multi-turn settings. We would also like to evaluate our model on a larger dataset, for example by extracting multi-turn dialogs from the OpenSubtitles corpus BIBREF33. | Perplexity of proposed MEED model is 19.795 vs 19.913 of next best result on test set. |
9cbea686732b5b85f77868ca47d2f93cf34516ed | 9cbea686732b5b85f77868ca47d2f93cf34516ed_0 | Q: How does the multi-turn dialog system learn?
Text: Introduction
Recent development in neural language modeling has generated significant excitement in the open-domain dialog generation community. The success of sequence-to-sequence learning BIBREF0, BIBREF1 in the field of neural machine translation has inspired researchers to apply the recurrent neural network (RNN) encoder-decoder structure to response generation BIBREF2. Specifically, the encoder RNN reads the input message, encodes it into a fixed context vector, and the decoder RNN uses it to generate the response. Shang et al. BIBREF3 applied the same structure combined with attention mechanism BIBREF4 on Twitter-style microblogging data. Following the vanilla sequence-to-sequence structure, various improvements have been made on the neural conversation model—for example, increasing the diversity of the response BIBREF5, BIBREF6, modeling personalities of the speakers BIBREF7, and developing topic aware dialog systems BIBREF8.
Some of the recent work aims at incorporating affect information into neural conversational models. While making the responses emotionally richer, existing approaches either explicitly require an emotion label as input BIBREF9, or rely on hand-crafted rules to determine the desired emotion responses BIBREF10, BIBREF11, ignoring the subtle emotional interactions captured in multi-turn conversations, which we believe to be an important aspect of human dialogs. For example, Gottman BIBREF12 found that couples are likely to practice the so called emotional reciprocity. When an argument starts, one partner's angry and aggressive utterance is often met with equally furious and negative utterance, resulting in more heated exchanges. On the other hand, responding with complementary emotions (such as reassurance and sympathy) is more likely to lead to a successful relationship. However, to the best of our knowledge, the psychology and social science literature does not offer clear rules for emotional interaction. It seems such social and emotional intelligence is captured in our conversations. This is why we believe that the data driven approach will have an advantage.
In this paper, we propose an end-to-end data driven multi-turn dialog system capable of learning and generating emotionally appropriate and human-like responses with the ultimate goal of reproducing social behaviors that are habitual in human-human conversations. We chose the multi-turn setting because only in such cases is the emotion appropriateness most necessary. To this end, we employ the latest multi-turn dialog model by Xing et al. BIBREF13, but we add an additional emotion RNN to process the emotional information in each history utterance. By leveraging an external text analysis program, we encode the emotion aspects of each utterance into a fixed-sized one-zero vector. This emotion RNN reads and encodes the input affect information, and then uses the final hidden state as the emotion representation vector for the context. When decoding, at each time step, this emotion vector is concatenated with the hidden state of the decoder and passed to the softmax layer to produce the probability distribution over the vocabulary.
Thereby, our contributions are threefold. (1) We propose a novel emotion-tracking dialog generation model that learns the emotional interactions directly from the data. This approach is free of human-defined heuristic rules, and hence, is more robust and fundamental than those described in existing work BIBREF9, BIBREF10, BIBREF11. (2) We apply the emotion-tracking mechanism to multi-turn dialogs, which has never been attempted before. Human evaluation shows that our model produces responses that are emotionally more appropriate than the baselines, while slightly improving the language fluency. (3) We illustrate a human-evaluation approach for judging machine-produced emotional dialogs. We consider factors such as the balance of positive and negative sentiments in test dialogs, a well-chosen range of topics, and dialogs that our human evaluators can relate. It is the first time such an approach is designed with consideration for the human judges. Our main goal is to increase the objectivity of the results and reduce judges' mistakes due to out-of-context dialogs they have to evaluate.
The rest of the paper unfolds as follows. Section SECREF2 discusses some related work. In Section SECREF3, we give detailed description of the methodology. We present experimental results and some analysis in Section SECREF4. The paper is concluded in Section SECREF5, followed by some future work we plan to do.
Related Work
Many early open-domain dialog systems are rule-based and often require expert knowledge to develop. More recent work in response generation seeks data-driven solutions, leveraging on machine learning techniques and the availability of data. Ritter et al. BIBREF14 first applied statistical machine translation (SMT) methods to this area. However, it turns out that bilingual translation and response generation are different. The source and target sentences in translation share the same meaning; thus the words in the two sentences tend to align well with each other. However, for response generation, one could have many equally good responses for a single input. Later studies use the sequence-to-sequence neural framework to model dialogs, followed by various improving work on the quality of the responses, especially the emotional aspects of the conversations.
The vanilla RNN encoder-decoder is usually applied to single-turn response generation, where the response is generated based on one single input message. In multi-turn settings, where a context with multiple history utterances is given, the same structure often ignores the hierarchical characteristic of the context. Some recent work addresses this problem by adopting a hierarchical recurrent encoder-decoder (HRED) structure BIBREF15, BIBREF16, BIBREF17. To give attention to different parts of the context while generating responses, Xing et al. BIBREF13 proposed the hierarchical recurrent attention network (HRAN) that uses a hierarchical attention mechanism. However, these multi-turn dialog models do not take into account the turn-taking emotional changes of the dialog.
Recent work on incorporating affect information into natural language processing tasks, such as building emotional dialog systems and affect language models, has inspired our current work. For example, the Emotional Chatting Machine (ECM) BIBREF9 takes as input a post and a specified emotional category and generates a response that belongs to the pre-defined emotion category. The main idea is to use an internal memory module to capture the emotion dynamics during decoding, and an external memory module to model emotional expressions explicitly by assigning different probability values to emotional words as opposed to regular words. However, the problem setting requires an emotional label as an input, which might be unpractical in real scenarios. Asghar et al. BIBREF10 proposed to augment the word embeddings with a VAD (valence, arousal, and dominance) affective space by using an external dictionary, and designed three affect-related loss functions, namely minimizing affective dissonance, maximizing affective dissonance, and maximizing affective content. The paper also proposed the affectively diverse beam search during decoding, so that the generated candidate responses are as affectively diverse as possible. However, literature in affective science does not necessarily validate such rules. In fact, the best strategy to speak to an angry customer is the de-escalation strategy (using neutral words to validate anger) rather than employing equally emotional words (minimizing affect dissonance) or words that convey happiness (maximizing affect dissonance). Zhong et al. BIBREF11 proposed a biased attention mechanism on affect-rich words in the input message, also by taking advantage of the VAD embeddings. The model is trained with a weighted cross-entropy loss function, which encourages the generation of emotional words. However, these models only deal with single-turn conversations. More importantly, they all adopt hand-coded emotion responding mechanisms. To our knowledge, we are the first to consider modeling the emotional flow and its appropriateness in a multi-turn dialog system by learning from humans.
Model
In this paper, we consider the problem of generating a response $\mathbf {y}$ given a context $\mathbf {X}$ consisting of multiple previous utterances, by estimating the probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ from a data set $\mathcal {D}=\lbrace (\mathbf {X}^{(i)},\mathbf {y}^{(i)})\rbrace _{i=1}^N$ containing $N$ context-response pairs. Here $\mathbf {X}^{(i)}=(\mathbf {x}^{(i)}_1,\mathbf {x}^{(i)}_2,\dots ,\mathbf {x}^{(i)}_{m_i})$ is a sequence of $m_i$ utterances, each utterance $\mathbf {x}^{(i)}_j$ is a sequence of $n_{ij}$ words, and similarly, the response $\mathbf {y}^{(i)}$ is a sequence of $T_i$ words.
Usually the probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ can be modeled by an RNN language model conditioned on $\mathbf {X}$. When generating the word $y_t$ at time step $t$, the context $\mathbf {X}$ is encoded into a fixed-sized dialog context vector $\mathbf {c}_t$ by following the hierarchical attention structure in HRAN BIBREF13. Additionally, we extract the emotion information from the utterances in $\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\mathbf {e}$, which is combined with $\mathbf {c}_t$ to produce the distribution. The overall architecture of the model is depicted in Figure FIGREF4. We are going to elaborate on how to obtain $\mathbf {c}_t$ and $\mathbf {e}$, and how they are combined in the decoding part.
Model ::: Hierarchical Attention
The hierarchical attention structure involves two encoders to produce the dialog context vector $\mathbf {c}_t$, namely the word-level encoder and the utterance-level encoder. The word-level encoder is essentially a bidirectional RNN with gated recurrent units (GRU) BIBREF1. For utterance $\mathbf {x}_j$ in $\mathbf {X}$ ($j=1,2,\dots ,m$), the bidirectional encoder produces two hidden states at each word position $k$, the forward hidden state $\mathbf {h}^\mathrm {f}_{jk}$ and the backward hidden state $\mathbf {h}^\mathrm {b}_{jk}$. The final hidden state $\mathbf {h}_{jk}$ is then obtained by concatenating the two,
The utterance-level encoder is a unidirectional RNN with GRU that goes from the last utterance in the context to the first, with its input at each step as the summary of the corresponding utterance, which is obtained by applying a Bahdanau-style attention mechanism BIBREF4 on the word-level encoder output. More specifically, at decoding step $t$, the summary of utterance $\mathbf {x}_j$ is a linear combination of $\mathbf {h}_{jk}$, for $k=1,2,\dots ,n_j$,
Here $\alpha _{jk}^t$ is the word-level attention score placed on $\mathbf {h}_{jk}$, and can be calculated as
where $\mathbf {s}_{t-1}$ is the previous hidden state of the decoder, $\mathbf {\ell }_{j+1}^t$ is the previous hidden state of the utterance-level encoder, and $\mathbf {v}_a$, $\mathbf {U}_a$, $\mathbf {V}_a$ and $\mathbf {W}_a$ are word-level attention parameters. The final dialog context vector $\mathbf {c}_t$ is then obtained as another linear combination of the outputs of the utterance-level encoder $\mathbf {\ell }_{j}^t$, for $j=1,2,\dots ,m$,
Here $\beta _{j}^t$ is the utterance-level attention score placed on $\mathbf {\ell }_{j}^t$, and can be calculated as
where $\mathbf {s}_{t-1}$ is the previous hidden state of the decoder, and $\mathbf {v}_b$, $\mathbf {U}_b$ and $\mathbf {W}_b$ are utterance-level attention parameters.
Model ::: Emotion Encoder
In order to capture the emotion information carried in the context $\mathbf {X}$, we utilize an external text analysis program called the Linguistic Inquiry and Word Count (LIWC) BIBREF18. LIWC accepts text files as input, and then compares each word in the input with a user-defined dictionary, assigning it to one or more of the pre-defined psychologically-relevant categories. We make use of five of these categories, related to emotion, namely positive emotion, negative emotion, anxious, angry, and sad. Using the newest version of the program LIWC2015, we are able to map each utterance $\mathbf {x}_j$ in the context to a six-dimensional indicator vector ${1}(\mathbf {x}_j)$, with the first five entries corresponding to the five emotion categories, and the last one corresponding to neutral. If any word in $\mathbf {x}_j$ belongs to one of the five categories, then the corresponding entry in ${1}(\mathbf {x}_j)$ is set to 1; otherwise, $\mathbf {x}_j$ is treated as neutral, with the last entry of ${1}(\mathbf {x}_j)$ set to 1. For example, assuming $\mathbf {x}_j=$ “he is worried about me”, then
${1}(\mathbf {x}_j)=(0,1,1,0,0,0)^\top $, since the word “worried” is assigned to both negative emotion and anxious. We apply a dense layer with a sigmoid activation function on top of ${1}(\mathbf {x}_j)$ to embed the emotion indicator vector into a continuous space, $\mathbf {a}_j=\sigma \big (\mathbf {W}_e\,{1}(\mathbf {x}_j)+\mathbf {b}_e\big )$,
where $\mathbf {W}_e$ and $\mathbf {b}_e$ are trainable parameters. The emotion flow of the context $\mathbf {X}$ is then modeled by a unidirectional RNN with GRU going from the first utterance in the context to the last, with its input being $\mathbf {a}_j$ at each step. The final emotion context vector $\mathbf {e}$ is obtained as the last hidden state of this emotion encoding RNN.
Model ::: Decoding
The probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ can be factorized as $p(\mathbf {y}\,|\,\mathbf {X})=\prod _{t=1}^{T}p(y_t\,|\,y_1,\dots ,y_{t-1},\mathbf {X})$, where $T$ is the length of the response.
We model this distribution using an RNN language model along with the emotion context vector $\mathbf {e}$. Specifically, at time step $t$, the hidden state of the decoder $\mathbf {s}_t$ is obtained by applying the GRU function to the previous hidden state $\mathbf {s}_{t-1}$, the dialog context vector $\mathbf {c}_t$, and the word embedding $\mathbf {w}_{y_{t-1}}$ of the previously generated word $y_{t-1}$. Similar to Affect-LM BIBREF19, we then define a new feature vector $\mathbf {o}_t$ by concatenating $\mathbf {s}_t$ with the emotion context vector $\mathbf {e}$, $\mathbf {o}_t=[\mathbf {s}_t;\,\mathbf {e}]$,
on which we apply a softmax layer to obtain a probability distribution over the vocabulary. Each term $p(y_t\,|\,y_1,\dots ,y_{t-1},\mathbf {X})$ in the factorization above is then the entry of this distribution corresponding to $y_t$.
We use the cross-entropy loss as our objective function.
Evaluation
We trained our model using two different datasets and compared its performance with HRAN as well as the basic sequence-to-sequence model by performing both offline and online testing.
Evaluation ::: Datasets
We use two different dialog corpora to train our model—the Cornell Movie Dialogs Corpus BIBREF20 and the DailyDialog dataset BIBREF21.
Cornell Movie Dialogs Corpus. The dataset contains 83,097 dialogs (220,579 conversational exchanges) extracted from raw movie scripts. In total there are 304,713 utterances.
DailyDialog. The dataset is developed by crawling raw data from websites used for language learners to learn English dialogs in daily life. It contains 13,118 dialogs in total.
We summarize some of the basic information regarding the two datasets in Table TABREF25.
In our experiments, the models are first trained on the Cornell Movie Dialogs Corpus, and then fine-tuned on the DailyDialog dataset. We adopted this training pattern because the Cornell dataset is bigger but noisier, while DailyDialog is smaller but more daily-based. To create a training set and a validation set for each of the two datasets, we take segments of each dialog with number of turns no more than six, to serve as the training/validation examples. Specifically, for each dialog $\mathbf {D}=(\mathbf {x}_1,\mathbf {x}_2,\dots ,\mathbf {x}_M)$, we create $M-1$ context-response pairs, namely $\mathbf {U}_i=(\mathbf {x}_{s_i},\dots ,\mathbf {x}_i)$ and $\mathbf {y}_i=\mathbf {x}_{i+1}$, for $i=1,2,\dots ,M-1$, where $s_i=\max (1,i-4)$. We filter out those pairs that have at least one utterance with length greater than 30. We also reduce the frequency of those pairs whose responses appear too many times (the threshold is set to 10 for Cornell, and 5 for DailyDialog), to prevent them from dominating the learning procedure. See Table TABREF25 for the sizes of the training and validation sets. The test set consists of 100 dialogs with four turns. We give more detailed description of how we create the test set in Section SECREF31.
Evaluation ::: Baselines and Implementation
We compared our multi-turn emotionally engaging dialog model (denoted as MEED) with two baselines—the vanilla sequence-to-sequence model (denoted as S2S) and HRAN. We chose S2S and HRAN as baselines because we would like to evaluate our model's capability to keep track of the multi-turn context and to produce emotionally more appropriate responses, respectively. In order to adapt S2S to the multi-turn setting, we concatenate all the history utterances in the context into one.
For all the models, the vocabulary consists of the 20,000 most frequent words in the Cornell and DailyDialog datasets, plus three extra tokens: <unk> for words that do not exist in the vocabulary, <go> indicating the beginning of an utterance, and <eos> indicating the end of an utterance. Here we summarize the configurations and parameters of our experiments:
We set the word embedding size to 256. We initialized the word embeddings in the models with word2vec BIBREF22 vectors first trained on Cornell and then fine-tuned on DailyDialog, consistent with the training procedure of the models.
We set the number of hidden units of each RNN to 256, the word-level attention depth to 256, and the utterance-level attention depth to 128. The output size of the emotion embedding layer is 256.
We optimized the objective function using the Adam optimizer BIBREF23 with an initial learning rate of 0.001. We stopped training the models when the lowest perplexity on the validation sets was achieved.
For prediction, we used beam search BIBREF24 with a beam width of 256.
Evaluation ::: Evaluation Metrics
The evaluation of chatbots remains an open problem in the field. Recent work BIBREF25 has shown that the automatic evaluation metrics borrowed from machine translation such as BLEU score BIBREF26 tend to align poorly with human judgement. Therefore, in this paper, we mainly adopt human evaluation, along with perplexity, following the existing work.
Evaluation ::: Evaluation Metrics ::: Human evaluation setup
To develop a test set for human evaluation, we first selected the emotionally colored dialogs with exactly four turns from the DailyDialog dataset. In the dataset each dialog turn is annotated with a corresponding emotional category, including the neutral one. For our purposes we filtered out only those dialogs where more than a half of utterances have non-neutral emotional labels. This gave us 78 emotionally positive dialogs and 14 emotionally negative dialogs. In order to have a balanced test set with equal number of positive and negative dialogs, we recruited two English-speaking students from our university without any relationship to the authors' lab and instructed them to create five negative dialogs with four turns, as if they were interacting with another human, according to each of the following topics: relationships, entertainment, service, work and study, and everyday situations. Thus each person produced 25 dialogs, and in total we obtained 50 emotionally negative daily dialogs in addition to the 14 already available. To form the test set, we randomly selected 50 emotionally positive and 50 emotionally negative dialogs from the two pools of dialogs described above (78 positive dialogs from DailyDialog, 64 negative dialogs from DailyDialog and human-generated).
For human evaluation of the models, we recruited another four English-speaking students from our university without any relationship to the authors' lab to rate the responses generated by the models. Specifically, we randomly shuffled the 100 dialogs in the test set, then we used the first three utterances of each dialog as the input to the three models being compared and let them generate the responses. According to the context given, the raters were instructed to evaluate the quality of the responses based on three criteria: (1) grammatical correctness—whether or not the response is fluent and free of grammatical mistakes; (2) contextual coherence—whether or not the response is context sensitive to the previous dialog history; (3) emotional appropriateness—whether or not the response conveys the right emotion and feels as if it had been produced by a human. For each criterion, the raters gave scores of either 0, 1 or 2, where 0 means bad, 2 means good, and 1 indicates neutral.
Evaluation ::: Results
Table TABREF34 gives the perplexity scores obtained by the three models on the two validation sets and the test set. As shown in the table, MEED achieves the lowest perplexity score on all three sets. We also conducted t-test on the perplexity obtained, and results show significant improvements (with $p$-value $<0.05$).
Table TABREF34, TABREF35 and TABREF35 summarize the human evaluation results on the responses' grammatical correctness, contextual coherence, and emotional appropriateness, respectively. In the tables, we give the percentage of votes each model received for the three scores, the average score obtained with improvements over S2S, and the agreement score among the raters. Note that we report Fleiss' $\kappa $ score BIBREF27 for contextual coherence and emotional appropriateness, and Finn's $r$ score BIBREF28 for grammatical correctness. We did not use Fleiss' $\kappa $ score for grammatical correctness. As agreement is extremely high, this can make Fleiss' $\kappa $ very sensitive to prevalence BIBREF29. On the contrary, we did not use Finn's $r$ score for contextual coherence and emotional appropriateness because it is only reasonable when the observed variance is significantly less than the chance variance BIBREF30, which did not apply to these two criteria. As shown in the tables, we got high agreement among the raters for grammatical correctness, and fair agreement among the raters for contextual coherence and emotional appropriateness. For grammatical correctness, all three models achieved high scores, which means all models are capable of generating fluent utterances that make sense. For contextual coherence and emotional appropriateness, MEED achieved higher average scores than S2S and HRAN, which means MEED keeps better track of the context and can generate responses that are emotionally more appropriate and natural. We conducted Friedman test BIBREF31 on the human evaluation results, showing the improvements of MEED are significant (with $p$-value $<0.01$).
Evaluation ::: Results ::: Case Study
We present four sample dialogs in Table TABREF36, along with the responses generated by the three models. Dialog 1 and 2 are emotionally positive and dialog 3 and 4 are negative. For the first two examples, we can see that MEED is able to generate more emotional content (like “fun” and “congratulations”) that is appropriate according to the context. For dialog 4, MEED responds in sympathy to the other speaker, which is consistent with the second utterance in the context. On the contrary, HRAN poses a question in reply, contradicting the dialog history.
Conclusion and Future Work
According to the Media Equation Theory BIBREF32, people respond to computers socially. This means humans expect talking to computers as they talk to other human beings. This is why we believe reproducing social and conversational intelligence will make social chatbots more believable and socially engaging. In this paper, we propose a multi-turn dialog system capable of generating emotionally appropriate responses, which is the first step toward such a goal. We have demonstrated how to do so by (1) modeling utterances with extra affect vectors, (2) creating an emotional encoding mechanism that learns emotion exchanges in the dataset, (3) curating a multi-turn dialog dataset, and (4) evaluating the model with offline and online experiments.
As future work, we would like to investigate the diversity issue of the responses generated, possibly by extending the mutual information objective function BIBREF5 to multi-turn settings. We would also like to evaluate our model on a larger dataset, for example by extracting multi-turn dialogs from the OpenSubtitles corpus BIBREF33. | we extract the emotion information from the utterances in $\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\mathbf {e}$, which is combined with $\mathbf {c}_t$ to produce the distribution |
6aee16c4f319a190c2a451c1c099b66162299a28 | 6aee16c4f319a190c2a451c1c099b66162299a28_0 | Q: How is human evaluation performed?
Text: Introduction
Recent development in neural language modeling has generated significant excitement in the open-domain dialog generation community. The success of sequence-to-sequence learning BIBREF0, BIBREF1 in the field of neural machine translation has inspired researchers to apply the recurrent neural network (RNN) encoder-decoder structure to response generation BIBREF2. Specifically, the encoder RNN reads the input message, encodes it into a fixed context vector, and the decoder RNN uses it to generate the response. Shang et al. BIBREF3 applied the same structure combined with attention mechanism BIBREF4 on Twitter-style microblogging data. Following the vanilla sequence-to-sequence structure, various improvements have been made on the neural conversation model—for example, increasing the diversity of the response BIBREF5, BIBREF6, modeling personalities of the speakers BIBREF7, and developing topic aware dialog systems BIBREF8.
Some of the recent work aims at incorporating affect information into neural conversational models. While making the responses emotionally richer, existing approaches either explicitly require an emotion label as input BIBREF9, or rely on hand-crafted rules to determine the desired emotion responses BIBREF10, BIBREF11, ignoring the subtle emotional interactions captured in multi-turn conversations, which we believe to be an important aspect of human dialogs. For example, Gottman BIBREF12 found that couples are likely to practice the so called emotional reciprocity. When an argument starts, one partner's angry and aggressive utterance is often met with equally furious and negative utterance, resulting in more heated exchanges. On the other hand, responding with complementary emotions (such as reassurance and sympathy) is more likely to lead to a successful relationship. However, to the best of our knowledge, the psychology and social science literature does not offer clear rules for emotional interaction. It seems such social and emotional intelligence is captured in our conversations. This is why we believe that the data driven approach will have an advantage.
In this paper, we propose an end-to-end data driven multi-turn dialog system capable of learning and generating emotionally appropriate and human-like responses with the ultimate goal of reproducing social behaviors that are habitual in human-human conversations. We chose the multi-turn setting because only in such cases is the emotion appropriateness most necessary. To this end, we employ the latest multi-turn dialog model by Xing et al. BIBREF13, but we add an additional emotion RNN to process the emotional information in each history utterance. By leveraging an external text analysis program, we encode the emotion aspects of each utterance into a fixed-sized one-zero vector. This emotion RNN reads and encodes the input affect information, and then uses the final hidden state as the emotion representation vector for the context. When decoding, at each time step, this emotion vector is concatenated with the hidden state of the decoder and passed to the softmax layer to produce the probability distribution over the vocabulary.
Thereby, our contributions are threefold. (1) We propose a novel emotion-tracking dialog generation model that learns the emotional interactions directly from the data. This approach is free of human-defined heuristic rules, and hence, is more robust and fundamental than those described in existing work BIBREF9, BIBREF10, BIBREF11. (2) We apply the emotion-tracking mechanism to multi-turn dialogs, which has never been attempted before. Human evaluation shows that our model produces responses that are emotionally more appropriate than the baselines, while slightly improving the language fluency. (3) We illustrate a human-evaluation approach for judging machine-produced emotional dialogs. We consider factors such as the balance of positive and negative sentiments in test dialogs, a well-chosen range of topics, and dialogs that our human evaluators can relate. It is the first time such an approach is designed with consideration for the human judges. Our main goal is to increase the objectivity of the results and reduce judges' mistakes due to out-of-context dialogs they have to evaluate.
The rest of the paper unfolds as follows. Section SECREF2 discusses some related work. In Section SECREF3, we give detailed description of the methodology. We present experimental results and some analysis in Section SECREF4. The paper is concluded in Section SECREF5, followed by some future work we plan to do.
Related Work
Many early open-domain dialog systems are rule-based and often require expert knowledge to develop. More recent work in response generation seeks data-driven solutions, leveraging on machine learning techniques and the availability of data. Ritter et al. BIBREF14 first applied statistical machine translation (SMT) methods to this area. However, it turns out that bilingual translation and response generation are different. The source and target sentences in translation share the same meaning; thus the words in the two sentences tend to align well with each other. However, for response generation, one could have many equally good responses for a single input. Later studies use the sequence-to-sequence neural framework to model dialogs, followed by various improving work on the quality of the responses, especially the emotional aspects of the conversations.
The vanilla RNN encoder-decoder is usually applied to single-turn response generation, where the response is generated based on one single input message. In multi-turn settings, where a context with multiple history utterances is given, the same structure often ignores the hierarchical characteristic of the context. Some recent work addresses this problem by adopting a hierarchical recurrent encoder-decoder (HRED) structure BIBREF15, BIBREF16, BIBREF17. To give attention to different parts of the context while generating responses, Xing et al. BIBREF13 proposed the hierarchical recurrent attention network (HRAN) that uses a hierarchical attention mechanism. However, these multi-turn dialog models do not take into account the turn-taking emotional changes of the dialog.
Recent work on incorporating affect information into natural language processing tasks, such as building emotional dialog systems and affect language models, has inspired our current work. For example, the Emotional Chatting Machine (ECM) BIBREF9 takes as input a post and a specified emotional category and generates a response that belongs to the pre-defined emotion category. The main idea is to use an internal memory module to capture the emotion dynamics during decoding, and an external memory module to model emotional expressions explicitly by assigning different probability values to emotional words as opposed to regular words. However, the problem setting requires an emotional label as an input, which might be unpractical in real scenarios. Asghar et al. BIBREF10 proposed to augment the word embeddings with a VAD (valence, arousal, and dominance) affective space by using an external dictionary, and designed three affect-related loss functions, namely minimizing affective dissonance, maximizing affective dissonance, and maximizing affective content. The paper also proposed the affectively diverse beam search during decoding, so that the generated candidate responses are as affectively diverse as possible. However, literature in affective science does not necessarily validate such rules. In fact, the best strategy to speak to an angry customer is the de-escalation strategy (using neutral words to validate anger) rather than employing equally emotional words (minimizing affect dissonance) or words that convey happiness (maximizing affect dissonance). Zhong et al. BIBREF11 proposed a biased attention mechanism on affect-rich words in the input message, also by taking advantage of the VAD embeddings. The model is trained with a weighted cross-entropy loss function, which encourages the generation of emotional words. However, these models only deal with single-turn conversations. More importantly, they all adopt hand-coded emotion responding mechanisms. To our knowledge, we are the first to consider modeling the emotional flow and its appropriateness in a multi-turn dialog system by learning from humans.
Model
In this paper, we consider the problem of generating a response $\mathbf {y}$ given a context $\mathbf {X}$ consisting of multiple previous utterances, by estimating the probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ from a data set $\mathcal {D}=\lbrace (\mathbf {X}^{(i)},\mathbf {y}^{(i)})\rbrace _{i=1}^N$ containing $N$ context-response pairs. Here $\mathbf {X}^{(i)}=(\mathbf {x}^{(i)}_1,\mathbf {x}^{(i)}_2,\dots ,\mathbf {x}^{(i)}_{m_i})$ is a sequence of $m_i$ utterances, each utterance $\mathbf {x}^{(i)}_j$ is a sequence of $n_{ij}$ words, and similarly, the response $\mathbf {y}^{(i)}$ is a sequence of $T_i$ words.
Usually the probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ can be modeled by an RNN language model conditioned on $\mathbf {X}$. When generating the word $y_t$ at time step $t$, the context $\mathbf {X}$ is encoded into a fixed-sized dialog context vector $\mathbf {c}_t$ by following the hierarchical attention structure in HRAN BIBREF13. Additionally, we extract the emotion information from the utterances in $\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\mathbf {e}$, which is combined with $\mathbf {c}_t$ to produce the distribution. The overall architecture of the model is depicted in Figure FIGREF4. We are going to elaborate on how to obtain $\mathbf {c}_t$ and $\mathbf {e}$, and how they are combined in the decoding part.
Model ::: Hierarchical Attention
The hierarchical attention structure involves two encoders to produce the dialog context vector $\mathbf {c}_t$, namely the word-level encoder and the utterance-level encoder. The word-level encoder is essentially a bidirectional RNN with gated recurrent units (GRU) BIBREF1. For utterance $\mathbf {x}_j$ in $\mathbf {X}$ ($j=1,2,\dots ,m$), the bidirectional encoder produces two hidden states at each word position $k$, the forward hidden state $\mathbf {h}^\mathrm {f}_{jk}$ and the backward hidden state $\mathbf {h}^\mathrm {b}_{jk}$. The final hidden state $\mathbf {h}_{jk}$ is then obtained by concatenating the two,
The utterance-level encoder is a unidirectional RNN with GRU that goes from the last utterance in the context to the first, with its input at each step as the summary of the corresponding utterance, which is obtained by applying a Bahdanau-style attention mechanism BIBREF4 on the word-level encoder output. More specifically, at decoding step $t$, the summary of utterance $\mathbf {x}_j$ is a linear combination of $\mathbf {h}_{jk}$, for $k=1,2,\dots ,n_j$,
Here $\alpha _{jk}^t$ is the word-level attention score placed on $\mathbf {h}_{jk}$, and can be calculated as
where $\mathbf {s}_{t-1}$ is the previous hidden state of the decoder, $\mathbf {\ell }_{j+1}^t$ is the previous hidden state of the utterance-level encoder, and $\mathbf {v}_a$, $\mathbf {U}_a$, $\mathbf {V}_a$ and $\mathbf {W}_a$ are word-level attention parameters. The final dialog context vector $\mathbf {c}_t$ is then obtained as another linear combination of the outputs of the utterance-level encoder $\mathbf {\ell }_{j}^t$, for $j=1,2,\dots ,m$,
Here $\beta _{j}^t$ is the utterance-level attention score placed on $\mathbf {\ell }_{j}^t$, and can be calculated as
where $\mathbf {s}_{t-1}$ is the previous hidden state of the decoder, and $\mathbf {v}_b$, $\mathbf {U}_b$ and $\mathbf {W}_b$ are utterance-level attention parameters.
Model ::: Emotion Encoder
In order to capture the emotion information carried in the context $\mathbf {X}$, we utilize an external text analysis program called the Linguistic Inquiry and Word Count (LIWC) BIBREF18. LIWC accepts text files as input, and then compares each word in the input with a user-defined dictionary, assigning it to one or more of the pre-defined psychologically-relevant categories. We make use of five of these categories, related to emotion, namely positive emotion, negative emotion, anxious, angry, and sad. Using the newest version of the program LIWC2015, we are able to map each utterance $\mathbf {x}_j$ in the context to a six-dimensional indicator vector ${1}(\mathbf {x}_j)$, with the first five entries corresponding to the five emotion categories, and the last one corresponding to neutral. If any word in $\mathbf {x}_j$ belongs to one of the five categories, then the corresponding entry in ${1}(\mathbf {x}_j)$ is set to 1; otherwise, $\mathbf {x}_j$ is treated as neutral, with the last entry of ${1}(\mathbf {x}_j)$ set to 1. For example, assuming $\mathbf {x}_j=$ “he is worried about me”, then
${1}(\mathbf {x}_j)=(0,1,1,0,0,0)^\top $, since the word “worried” is assigned to both negative emotion and anxious. We apply a dense layer with a sigmoid activation function on top of ${1}(\mathbf {x}_j)$ to embed the emotion indicator vector into a continuous space, $\mathbf {a}_j=\sigma \big (\mathbf {W}_e\,{1}(\mathbf {x}_j)+\mathbf {b}_e\big )$,
where $\mathbf {W}_e$ and $\mathbf {b}_e$ are trainable parameters. The emotion flow of the context $\mathbf {X}$ is then modeled by a unidirectional RNN with GRU going from the first utterance in the context to the last, with its input being $\mathbf {a}_j$ at each step. The final emotion context vector $\mathbf {e}$ is obtained as the last hidden state of this emotion encoding RNN.
Model ::: Decoding
The probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ can be written as
We model the probability distribution using an RNN language model along with the emotion context vector $\mathbf {e}$. Specifically, at time step $t$, the hidden state of the decoder $\mathbf {s}_t$ is obtained by applying the GRU function,
where $\mathbf {w}_{y_{t-1}}$ is the word embedding of $y_{t-1}$. Similar to Affect-LM BIBREF19, we then define a new feature vector $\mathbf {o}_t$ by concatenating $\mathbf {s}_t$ with the emotion context vector $\mathbf {e}$,
on which we apply a softmax layer to obtain a probability distribution over the vocabulary,
Each term in Equation (DISPLAY_FORM16) is then given by
We use the cross-entropy loss as our objective function
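A minimal sketch of the emotion-conditioned output layer follows, assuming a learned output projection $\mathbf {W}_o$ (the text only says a softmax layer is applied) and random placeholder values.

```python
import numpy as np

rng = np.random.default_rng(0)
d_s, d_e, V = 256, 256, 20000        # decoder size, emotion size, vocabulary size

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

s_t = rng.normal(size=d_s)           # decoder GRU state at step t
e_ctx = rng.normal(size=d_e)         # emotion context vector e (last emotion-RNN state)
W_o = rng.normal(size=(V, d_s + d_e)) * 0.01   # assumed output projection

o_t = np.concatenate([s_t, e_ctx])   # feature vector [s_t ; e]
p_t = softmax(W_o @ o_t)             # distribution over the vocabulary

y_t = 42                             # index of the reference word (illustrative)
loss_t = -np.log(p_t[y_t])           # per-step cross-entropy term
print(p_t.sum(), loss_t > 0)         # ~1.0 True
```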
Evaluation
We trained our model using two different datasets and compared its performance with HRAN as well as the basic sequence-to-sequence model by performing both offline and online testing.
Evaluation ::: Datasets
We use two different dialog corpora to train our model—the Cornell Movie Dialogs Corpus BIBREF20 and the DailyDialog dataset BIBREF21.
Cornell Movie Dialogs Corpus. The dataset contains 83,097 dialogs (220,579 conversational exchanges) extracted from raw movie scripts. In total there are 304,713 utterances.
DailyDialog. The dataset was developed by crawling raw data from websites that language learners use to study English dialogs in daily life. It contains 13,118 dialogs in total.
We summarize some of the basic information regarding the two datasets in Table TABREF25.
In our experiments, the models are first trained on the Cornell Movie Dialogs Corpus, and then fine-tuned on the DailyDialog dataset. We adopted this training pattern because the Cornell dataset is bigger but noisier, while DailyDialog is smaller but closer to everyday conversation. To create a training set and a validation set for each of the two datasets, we take segments of each dialog with at most six turns to serve as the training/validation examples. Specifically, for each dialog $\mathbf {D}=(\mathbf {x}_1,\mathbf {x}_2,\dots ,\mathbf {x}_M)$, we create $M-1$ context-response pairs, namely $\mathbf {U}_i=(\mathbf {x}_{s_i},\dots ,\mathbf {x}_i)$ and $\mathbf {y}_i=\mathbf {x}_{i+1}$, for $i=1,2,\dots ,M-1$, where $s_i=\max (1,i-4)$. We filter out those pairs that have at least one utterance with length greater than 30. We also reduce the frequency of those pairs whose responses appear too many times (the threshold is set to 10 for Cornell, and 5 for DailyDialog), to prevent them from dominating the learning procedure. See Table TABREF25 for the sizes of the training and validation sets. The test set consists of 100 dialogs with four turns. We give a more detailed description of how we create the test set in Section SECREF31.
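The pair-construction procedure described above can be sketched in a few lines of Python; the toy dialog, the word-based length check, and the exact down-sampling rule for frequent responses are illustrative assumptions.

```python
from collections import Counter

MAX_LEN, FREQ_CAP = 30, 5            # length filter and response-frequency cap (DailyDialog)

def make_pairs(dialog):
    """dialog: list of utterances x_1..x_M -> list of (context, response) pairs."""
    pairs = []
    for i in range(1, len(dialog)):              # response index (0-based)
        s_i = max(0, i - 5)                      # keep at most five history utterances
        context, response = dialog[s_i:i], dialog[i]
        if all(len(u.split()) <= MAX_LEN for u in context + [response]):
            pairs.append((tuple(context), response))
    return pairs

def cap_frequency(pairs, cap=FREQ_CAP):
    """Drop pairs whose response has already appeared `cap` times."""
    seen, kept = Counter(), []
    for context, response in pairs:
        seen[response] += 1
        if seen[response] <= cap:
            kept.append((context, response))
    return kept

dialog = ["how was your day ?", "pretty good , thanks !", "glad to hear that .", "me too ."]
print(cap_frequency(make_pairs(dialog)))         # three context-response pairs
```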
Evaluation ::: Baselines and Implementation
We compared our multi-turn emotionally engaging dialog model (denoted as MEED) with two baselines—the vanilla sequence-to-sequence model (denoted as S2S) and HRAN. We chose S2S and HRAN as baselines because we would like to evaluate our model's capability to keep track of the multi-turn context and to produce emotionally more appropriate responses, respectively. In order to adapt S2S to the multi-turn setting, we concatenate all the history utterances in the context into one.
For all the models, the vocabulary consists of the 20,000 most frequent words in the Cornell and DailyDialog datasets, plus three extra tokens: <unk> for words that do not exist in the vocabulary, <go> indicating the beginning of an utterance, and <eos> indicating the end of an utterance. Here we summarize the configurations and parameters of our experiments (collected into a single configuration sketch after the list below):
We set the word embedding size to 256. We initialized the word embeddings in the models with word2vec BIBREF22 vectors first trained on Cornell and then fine-tuned on DailyDialog, consistent with the training procedure of the models.
We set the number of hidden units of each RNN to 256, the word-level attention depth to 256, and the utterance-level attention depth to 128. The output size of the emotion embedding layer is 256.
We optimized the objective function using the Adam optimizer BIBREF23 with an initial learning rate of 0.001. We stopped training the models when the lowest perplexity on the validation sets was achieved.
For prediction, we used beam search BIBREF24 with a beam width of 256.
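For reference, the settings listed above can be collected into a single configuration sketch (the dictionary layout is ours; the values are the ones reported above):

```python
CONFIG = {
    "vocab_size": 20_000 + 3,          # 20k most frequent words + <unk>, <go>, <eos>
    "word_embedding_size": 256,        # initialized with word2vec (Cornell, then DailyDialog)
    "rnn_hidden_units": 256,
    "word_attention_depth": 256,
    "utterance_attention_depth": 128,
    "emotion_embedding_size": 256,
    "optimizer": "adam",
    "learning_rate": 1e-3,
    "stopping_criterion": "lowest validation perplexity",
    "beam_width": 256,
}
```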
Evaluation ::: Evaluation Metrics
The evaluation of chatbots remains an open problem in the field. Recent work BIBREF25 has shown that the automatic evaluation metrics borrowed from machine translation such as BLEU score BIBREF26 tend to align poorly with human judgement. Therefore, in this paper, we mainly adopt human evaluation, along with perplexity, following the existing work.
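As a reminder, perplexity here is the exponential of the average per-token negative log-likelihood (the usual convention; the exact normalization is not spelled out in the text):

```python
import math

def perplexity(neg_log_probs, token_count):
    """exp of the average per-token negative log-likelihood."""
    return math.exp(sum(neg_log_probs) / token_count)

# e.g., three generated tokens assigned probabilities 0.2, 0.5 and 0.1 by the model
nll = [-math.log(p) for p in (0.2, 0.5, 0.1)]
print(round(perplexity(nll, len(nll)), 2))   # 4.64
```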
Evaluation ::: Evaluation Metrics ::: Human evaluation setup
To develop a test set for human evaluation, we first selected the emotionally colored dialogs with exactly four turns from the DailyDialog dataset. In the dataset, each dialog turn is annotated with a corresponding emotional category, including the neutral one. For our purposes we kept only those dialogs in which more than half of the utterances have non-neutral emotional labels. This gave us 78 emotionally positive dialogs and 14 emotionally negative dialogs. In order to have a balanced test set with an equal number of positive and negative dialogs, we recruited two English-speaking students from our university without any relationship to the authors' lab and instructed them to create five four-turn negative dialogs for each of the following topics, as if they were interacting with another human: relationships, entertainment, service, work and study, and everyday situations. Thus each person produced 25 dialogs, and in total we obtained 50 emotionally negative daily dialogs in addition to the 14 already available. To form the test set, we randomly selected 50 emotionally positive and 50 emotionally negative dialogs from the two pools of dialogs described above (78 positive dialogs from DailyDialog, 64 negative dialogs from DailyDialog and human-generated).
For human evaluation of the models, we recruited another four English-speaking students from our university without any relationship to the authors' lab to rate the responses generated by the models. Specifically, we randomly shuffled the 100 dialogs in the test set, then we used the first three utterances of each dialog as the input to the three models being compared and let them generate the responses. According to the context given, the raters were instructed to evaluate the quality of the responses based on three criteria: (1) grammatical correctness—whether or not the response is fluent and free of grammatical mistakes; (2) contextual coherence—whether or not the response is context sensitive to the previous dialog history; (3) emotional appropriateness—whether or not the response conveys the right emotion and feels as if it had been produced by a human. For each criterion, the raters gave scores of either 0, 1 or 2, where 0 means bad, 2 means good, and 1 indicates neutral.
Evaluation ::: Results
Table TABREF34 gives the perplexity scores obtained by the three models on the two validation sets and the test set. As shown in the table, MEED achieves the lowest perplexity score on all three sets. We also conducted a t-test on the perplexity scores obtained, and the results show significant improvements (with $p$-value $<0.05$).
Tables TABREF34, TABREF35 and TABREF35 summarize the human evaluation results on the responses' grammatical correctness, contextual coherence, and emotional appropriateness, respectively. In the tables, we give the percentage of votes each model received for the three scores, the average score obtained with improvements over S2S, and the agreement score among the raters. Note that we report Fleiss' $\kappa $ score BIBREF27 for contextual coherence and emotional appropriateness, and Finn's $r$ score BIBREF28 for grammatical correctness. We did not use Fleiss' $\kappa $ for grammatical correctness because agreement on this criterion is extremely high, which can make Fleiss' $\kappa $ very sensitive to prevalence BIBREF29. Conversely, we did not use Finn's $r$ for contextual coherence and emotional appropriateness because it is only reasonable when the observed variance is significantly less than the chance variance BIBREF30, which did not apply to these two criteria. As shown in the tables, we got high agreement among the raters for grammatical correctness, and fair agreement among the raters for contextual coherence and emotional appropriateness. For grammatical correctness, all three models achieved high scores, which means all models are capable of generating fluent utterances that make sense. For contextual coherence and emotional appropriateness, MEED achieved higher average scores than S2S and HRAN, which means MEED keeps better track of the context and can generate responses that are emotionally more appropriate and natural. We conducted a Friedman test BIBREF31 on the human evaluation results, which shows that the improvements of MEED are significant (with $p$-value $<0.01$).
Evaluation ::: Results ::: Case Study
We present four sample dialogs in Table TABREF36, along with the responses generated by the three models. Dialog 1 and 2 are emotionally positive and dialog 3 and 4 are negative. For the first two examples, we can see that MEED is able to generate more emotional content (like “fun” and “congratulations”) that is appropriate according to the context. For dialog 4, MEED responds in sympathy to the other speaker, which is consistent with the second utterance in the context. On the contrary, HRAN poses a question in reply, contradicting the dialog history.
Conclusion and Future Work
According to the Media Equation Theory BIBREF32, people respond to computers socially. This means humans expect to talk to computers the way they talk to other human beings. This is why we believe reproducing social and conversational intelligence will make social chatbots more believable and socially engaging. In this paper, we propose a multi-turn dialog system capable of generating emotionally appropriate responses, which is the first step toward such a goal. We have demonstrated how to do so by (1) modeling utterances with extra affect vectors, (2) creating an emotional encoding mechanism that learns emotion exchanges in the dataset, (3) curating a multi-turn dialog dataset, and (4) evaluating the model with offline and online experiments.
As future work, we would like to investigate the diversity issue of the responses generated, possibly by extending the mutual information objective function BIBREF5 to multi-turn settings. We would also like to evaluate our model on a larger dataset, for example by extracting multi-turn dialogs from the OpenSubtitles corpus BIBREF33. | (1) grammatical correctness, (2) contextual coherence, (3) emotional appropriateness |
4d4b9ff2da51b9e0255e5fab0b41dfe49a0d9012 | 4d4b9ff2da51b9e0255e5fab0b41dfe49a0d9012_0 | Q: Are any metrics other than perplexity measured?
Text: Introduction
Recent development in neural language modeling has generated significant excitement in the open-domain dialog generation community. The success of sequence-to-sequence learning BIBREF0, BIBREF1 in the field of neural machine translation has inspired researchers to apply the recurrent neural network (RNN) encoder-decoder structure to response generation BIBREF2. Specifically, the encoder RNN reads the input message, encodes it into a fixed context vector, and the decoder RNN uses it to generate the response. Shang et al. BIBREF3 applied the same structure combined with attention mechanism BIBREF4 on Twitter-style microblogging data. Following the vanilla sequence-to-sequence structure, various improvements have been made on the neural conversation model—for example, increasing the diversity of the response BIBREF5, BIBREF6, modeling personalities of the speakers BIBREF7, and developing topic aware dialog systems BIBREF8.
Some of the recent work aims at incorporating affect information into neural conversational models. While making the responses emotionally richer, existing approaches either explicitly require an emotion label as input BIBREF9, or rely on hand-crafted rules to determine the desired emotion responses BIBREF10, BIBREF11, ignoring the subtle emotional interactions captured in multi-turn conversations, which we believe to be an important aspect of human dialogs. For example, Gottman BIBREF12 found that couples are likely to practice the so called emotional reciprocity. When an argument starts, one partner's angry and aggressive utterance is often met with equally furious and negative utterance, resulting in more heated exchanges. On the other hand, responding with complementary emotions (such as reassurance and sympathy) is more likely to lead to a successful relationship. However, to the best of our knowledge, the psychology and social science literature does not offer clear rules for emotional interaction. It seems such social and emotional intelligence is captured in our conversations. This is why we believe that the data driven approach will have an advantage.
In this paper, we propose an end-to-end, data-driven multi-turn dialog system capable of learning and generating emotionally appropriate and human-like responses, with the ultimate goal of reproducing social behaviors that are habitual in human-human conversations. We chose the multi-turn setting because it is in such settings that emotional appropriateness matters most. To this end, we employ the latest multi-turn dialog model by Xing et al. BIBREF13, but we add an additional emotion RNN to process the emotional information in each history utterance. By leveraging an external text analysis program, we encode the emotion aspects of each utterance into a fixed-size binary indicator vector. This emotion RNN reads and encodes the input affect information, and then uses the final hidden state as the emotion representation vector for the context. When decoding, at each time step, this emotion vector is concatenated with the hidden state of the decoder and passed to the softmax layer to produce the probability distribution over the vocabulary.
Thereby, our contributions are threefold. (1) We propose a novel emotion-tracking dialog generation model that learns the emotional interactions directly from the data. This approach is free of human-defined heuristic rules, and hence is more robust and fundamental than those described in existing work BIBREF9, BIBREF10, BIBREF11. (2) We apply the emotion-tracking mechanism to multi-turn dialogs, which has never been attempted before. Human evaluation shows that our model produces responses that are emotionally more appropriate than the baselines, while slightly improving the language fluency. (3) We illustrate a human-evaluation approach for judging machine-produced emotional dialogs. We consider factors such as the balance of positive and negative sentiments in test dialogs, a well-chosen range of topics, and dialogs that our human evaluators can relate to. It is the first time such an approach has been designed with consideration for the human judges. Our main goal is to increase the objectivity of the results and reduce judges' mistakes due to out-of-context dialogs they have to evaluate.
The rest of the paper unfolds as follows. Section SECREF2 discusses some related work. In Section SECREF3, we give detailed description of the methodology. We present experimental results and some analysis in Section SECREF4. The paper is concluded in Section SECREF5, followed by some future work we plan to do.
Related Work
Many early open-domain dialog systems are rule-based and often require expert knowledge to develop. More recent work in response generation seeks data-driven solutions, leveraging on machine learning techniques and the availability of data. Ritter et al. BIBREF14 first applied statistical machine translation (SMT) methods to this area. However, it turns out that bilingual translation and response generation are different. The source and target sentences in translation share the same meaning; thus the words in the two sentences tend to align well with each other. However, for response generation, one could have many equally good responses for a single input. Later studies use the sequence-to-sequence neural framework to model dialogs, followed by various improving work on the quality of the responses, especially the emotional aspects of the conversations.
The vanilla RNN encoder-decoder is usually applied to single-turn response generation, where the response is generated based on one single input message. In multi-turn settings, where a context with multiple history utterances is given, the same structure often ignores the hierarchical characteristic of the context. Some recent work addresses this problem by adopting a hierarchical recurrent encoder-decoder (HRED) structure BIBREF15, BIBREF16, BIBREF17. To give attention to different parts of the context while generating responses, Xing et al. BIBREF13 proposed the hierarchical recurrent attention network (HRAN) that uses a hierarchical attention mechanism. However, these multi-turn dialog models do not take into account the turn-taking emotional changes of the dialog.
Recent work on incorporating affect information into natural language processing tasks, such as building emotional dialog systems and affect language models, has inspired our current work. For example, the Emotional Chatting Machine (ECM) BIBREF9 takes as input a post and a specified emotional category and generates a response that belongs to the pre-defined emotion category. The main idea is to use an internal memory module to capture the emotion dynamics during decoding, and an external memory module to model emotional expressions explicitly by assigning different probability values to emotional words as opposed to regular words. However, the problem setting requires an emotional label as an input, which might be unpractical in real scenarios. Asghar et al. BIBREF10 proposed to augment the word embeddings with a VAD (valence, arousal, and dominance) affective space by using an external dictionary, and designed three affect-related loss functions, namely minimizing affective dissonance, maximizing affective dissonance, and maximizing affective content. The paper also proposed the affectively diverse beam search during decoding, so that the generated candidate responses are as affectively diverse as possible. However, literature in affective science does not necessarily validate such rules. In fact, the best strategy to speak to an angry customer is the de-escalation strategy (using neutral words to validate anger) rather than employing equally emotional words (minimizing affect dissonance) or words that convey happiness (maximizing affect dissonance). Zhong et al. BIBREF11 proposed a biased attention mechanism on affect-rich words in the input message, also by taking advantage of the VAD embeddings. The model is trained with a weighted cross-entropy loss function, which encourages the generation of emotional words. However, these models only deal with single-turn conversations. More importantly, they all adopt hand-coded emotion responding mechanisms. To our knowledge, we are the first to consider modeling the emotional flow and its appropriateness in a multi-turn dialog system by learning from humans.
Model
In this paper, we consider the problem of generating response $\mathbf {y}$ given a context $\mathbf {X}$ consisting of multiple previous utterances by estimating the probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ from a data set $\mathcal {D}=\lbrace (\mathbf {X}^{(i)},\mathbf {y}^{(i)})\rbrace _{i=1}^N$ containing $N$ context-response pairs. Here
is a sequence of $m_i$ utterances, and
is a sequence of $n_{ij}$ words. Similarly,
is the response with $T_i$ words.
Usually the probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ can be modeled by an RNN language model conditioned on $\mathbf {X}$. When generating the word $y_t$ at time step $t$, the context $\mathbf {X}$ is encoded into a fixed-sized dialog context vector $\mathbf {c}_t$ by following the hierarchical attention structure in HRAN BIBREF13. Additionally, we extract the emotion information from the utterances in $\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\mathbf {e}$, which is combined with $\mathbf {c}_t$ to produce the distribution. The overall architecture of the model is depicted in Figure FIGREF4. We are going to elaborate on how to obtain $\mathbf {c}_t$ and $\mathbf {e}$, and how they are combined in the decoding part.
Model ::: Hierarchical Attention
The hierarchical attention structure involves two encoders to produce the dialog context vector $\mathbf {c}_t$, namely the word-level encoder and the utterance-level encoder. The word-level encoder is essentially a bidirectional RNN with gated recurrent units (GRU) BIBREF1. For utterance $\mathbf {x}_j$ in $\mathbf {X}$ ($j=1,2,\dots ,m$), the bidirectional encoder produces two hidden states at each word position $k$, the forward hidden state $\mathbf {h}^\mathrm {f}_{jk}$ and the backward hidden state $\mathbf {h}^\mathrm {b}_{jk}$. The final hidden state $\mathbf {h}_{jk}$ is then obtained by concatenating the two,
The utterance-level encoder is a unidirectional RNN with GRU that goes from the last utterance in the context to the first, with its input at each step as the summary of the corresponding utterance, which is obtained by applying a Bahdanau-style attention mechanism BIBREF4 on the word-level encoder output. More specifically, at decoding step $t$, the summary of utterance $\mathbf {x}_j$ is a linear combination of $\mathbf {h}_{jk}$, for $k=1,2,\dots ,n_j$,
Here $\alpha _{jk}^t$ is the word-level attention score placed on $\mathbf {h}_{jk}$, and can be calculated as
where $\mathbf {s}_{t-1}$ is the previous hidden state of the decoder, $\mathbf {\ell }_{j+1}^t$ is the previous hidden state of the utterance-level encoder, and $\mathbf {v}_a$, $\mathbf {U}_a$, $\mathbf {V}_a$ and $\mathbf {W}_a$ are word-level attention parameters. The final dialog context vector $\mathbf {c}_t$ is then obtained as another linear combination of the outputs of the utterance-level encoder $\mathbf {\ell }_{j}^t$, for $j=1,2,\dots ,m$,
Here $\beta _{j}^t$ is the utterance-level attention score placed on $\mathbf {\ell }_{j}^t$, and can be calculated as
where $\mathbf {s}_{t-1}$ is the previous hidden state of the decoder, and $\mathbf {v}_b$, $\mathbf {U}_b$ and $\mathbf {W}_b$ are utterance-level attention parameters.
Model ::: Emotion Encoder
In order to capture the emotion information carried in the context $\mathbf {X}$, we utilize an external text analysis program called the Linguistic Inquiry and Word Count (LIWC) BIBREF18. LIWC accepts text files as input, and then compares each word in the input with a user-defined dictionary, assigning it to one or more of the pre-defined psychologically-relevant categories. We make use of five of these categories, related to emotion, namely positive emotion, negative emotion, anxious, angry, and sad. Using the newest version of the program LIWC2015, we are able to map each utterance $\mathbf {x}_j$ in the context to a six-dimensional indicator vector ${1}(\mathbf {x}_j)$, with the first five entries corresponding to the five emotion categories, and the last one corresponding to neutral. If any word in $\mathbf {x}_j$ belongs to one of the five categories, then the corresponding entry in ${1}(\mathbf {x}_j)$ is set to 1; otherwise, $\mathbf {x}_j$ is treated as neutral, with the last entry of ${1}(\mathbf {x}_j)$ set to 1. For example, assuming $\mathbf {x}_j=$ “he is worried about me”, then
since the word “worried” is assigned to both negative emotion and anxious. We apply a dense layer with sigmoid activation function on top of ${1}(\mathbf {x}_j)$ to embed the emotion indicator vector into a continuous space,
where $\mathbf {W}_e$ and $\mathbf {b}_e$ are trainable parameters. The emotion flow of the context $\mathbf {X}$ is then modeled by a unidirectional RNN with GRU going from the first utterance in the context to the last, with its input being $\mathbf {a}_j$ at each step. The final emotion context vector $\mathbf {e}$ is obtained as the last hidden state of this emotion-encoding RNN.
Model ::: Decoding
The probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ can be written as
We model the probability distribution using an RNN language model along with the emotion context vector $\mathbf {e}$. Specifically, at time step $t$, the hidden state of the decoder $\mathbf {s}_t$ is obtained by applying the GRU function,
where $\mathbf {w}_{y_{t-1}}$ is the word embedding of $y_{t-1}$. Similar to Affect-LM BIBREF19, we then define a new feature vector $\mathbf {o}_t$ by concatenating $\mathbf {s}_t$ with the emotion context vector $\mathbf {e}$,
on which we apply a softmax layer to obtain a probability distribution over the vocabulary,
Each term in Equation (DISPLAY_FORM16) is then given by
We use the cross-entropy loss as our objective function
Evaluation
We trained our model using two different datasets and compared its performance with HRAN as well as the basic sequence-to-sequence model by performing both offline and online testing.
Evaluation ::: Datasets
We use two different dialog corpora to train our model—the Cornell Movie Dialogs Corpus BIBREF20 and the DailyDialog dataset BIBREF21.
Cornell Movie Dialogs Corpus. The dataset contains 83,097 dialogs (220,579 conversational exchanges) extracted from raw movie scripts. In total there are 304,713 utterances.
DailyDialog. The dataset was developed by crawling raw data from websites that language learners use to study English dialogs in daily life. It contains 13,118 dialogs in total.
We summarize some of the basic information regarding the two datasets in Table TABREF25.
In our experiments, the models are first trained on the Cornell Movie Dialogs Corpus, and then fine-tuned on the DailyDialog dataset. We adopted this training pattern because the Cornell dataset is bigger but noisier, while DailyDialog is smaller but closer to everyday conversation. To create a training set and a validation set for each of the two datasets, we take segments of each dialog with at most six turns to serve as the training/validation examples. Specifically, for each dialog $\mathbf {D}=(\mathbf {x}_1,\mathbf {x}_2,\dots ,\mathbf {x}_M)$, we create $M-1$ context-response pairs, namely $\mathbf {U}_i=(\mathbf {x}_{s_i},\dots ,\mathbf {x}_i)$ and $\mathbf {y}_i=\mathbf {x}_{i+1}$, for $i=1,2,\dots ,M-1$, where $s_i=\max (1,i-4)$. We filter out those pairs that have at least one utterance with length greater than 30. We also reduce the frequency of those pairs whose responses appear too many times (the threshold is set to 10 for Cornell, and 5 for DailyDialog), to prevent them from dominating the learning procedure. See Table TABREF25 for the sizes of the training and validation sets. The test set consists of 100 dialogs with four turns. We give a more detailed description of how we create the test set in Section SECREF31.
Evaluation ::: Baselines and Implementation
We compared our multi-turn emotionally engaging dialog model (denoted as MEED) with two baselines—the vanilla sequence-to-sequence model (denoted as S2S) and HRAN. We chose S2S and HRAN as baselines because we would like to evaluate our model's capability to keep track of the multi-turn context and to produce emotionally more appropriate responses, respectively. In order to adapt S2S to the multi-turn setting, we concatenate all the history utterances in the context into one.
For all the models, the vocabulary consists of the 20,000 most frequent words in the Cornell and DailyDialog datasets, plus three extra tokens: <unk> for words that do not exist in the vocabulary, <go> indicating the beginning of an utterance, and <eos> indicating the end of an utterance. Here we summarize the configurations and parameters of our experiments:
We set the word embedding size to 256. We initialized the word embeddings in the models with word2vec BIBREF22 vectors first trained on Cornell and then fine-tuned on DailyDialog, consistent with the training procedure of the models.
We set the number of hidden units of each RNN to 256, the word-level attention depth to 256, and the utterance-level attention depth to 128. The output size of the emotion embedding layer is 256.
We optimized the objective function using the Adam optimizer BIBREF23 with an initial learning rate of 0.001. We stopped training the models when the lowest perplexity on the validation sets was achieved.
For prediction, we used beam search BIBREF24 with a beam width of 256.
Evaluation ::: Evaluation Metrics
The evaluation of chatbots remains an open problem in the field. Recent work BIBREF25 has shown that the automatic evaluation metrics borrowed from machine translation such as BLEU score BIBREF26 tend to align poorly with human judgement. Therefore, in this paper, we mainly adopt human evaluation, along with perplexity, following the existing work.
Evaluation ::: Evaluation Metrics ::: Human evaluation setup
To develop a test set for human evaluation, we first selected the emotionally colored dialogs with exactly four turns from the DailyDialog dataset. In the dataset, each dialog turn is annotated with a corresponding emotional category, including the neutral one. For our purposes we kept only those dialogs in which more than half of the utterances have non-neutral emotional labels. This gave us 78 emotionally positive dialogs and 14 emotionally negative dialogs. In order to have a balanced test set with an equal number of positive and negative dialogs, we recruited two English-speaking students from our university without any relationship to the authors' lab and instructed them to create five four-turn negative dialogs for each of the following topics, as if they were interacting with another human: relationships, entertainment, service, work and study, and everyday situations. Thus each person produced 25 dialogs, and in total we obtained 50 emotionally negative daily dialogs in addition to the 14 already available. To form the test set, we randomly selected 50 emotionally positive and 50 emotionally negative dialogs from the two pools of dialogs described above (78 positive dialogs from DailyDialog, 64 negative dialogs from DailyDialog and human-generated).
For human evaluation of the models, we recruited another four English-speaking students from our university without any relationship to the authors' lab to rate the responses generated by the models. Specifically, we randomly shuffled the 100 dialogs in the test set, then we used the first three utterances of each dialog as the input to the three models being compared and let them generate the responses. According to the context given, the raters were instructed to evaluate the quality of the responses based on three criteria: (1) grammatical correctness—whether or not the response is fluent and free of grammatical mistakes; (2) contextual coherence—whether or not the response is context sensitive to the previous dialog history; (3) emotional appropriateness—whether or not the response conveys the right emotion and feels as if it had been produced by a human. For each criterion, the raters gave scores of either 0, 1 or 2, where 0 means bad, 2 means good, and 1 indicates neutral.
Evaluation ::: Results
Table TABREF34 gives the perplexity scores obtained by the three models on the two validation sets and the test set. As shown in the table, MEED achieves the lowest perplexity score on all three sets. We also conducted a t-test on the perplexity scores obtained, and the results show significant improvements (with $p$-value $<0.05$).
Tables TABREF34, TABREF35 and TABREF35 summarize the human evaluation results on the responses' grammatical correctness, contextual coherence, and emotional appropriateness, respectively. In the tables, we give the percentage of votes each model received for the three scores, the average score obtained with improvements over S2S, and the agreement score among the raters. Note that we report Fleiss' $\kappa $ score BIBREF27 for contextual coherence and emotional appropriateness, and Finn's $r$ score BIBREF28 for grammatical correctness. We did not use Fleiss' $\kappa $ for grammatical correctness because agreement on this criterion is extremely high, which can make Fleiss' $\kappa $ very sensitive to prevalence BIBREF29. Conversely, we did not use Finn's $r$ for contextual coherence and emotional appropriateness because it is only reasonable when the observed variance is significantly less than the chance variance BIBREF30, which did not apply to these two criteria. As shown in the tables, we got high agreement among the raters for grammatical correctness, and fair agreement among the raters for contextual coherence and emotional appropriateness. For grammatical correctness, all three models achieved high scores, which means all models are capable of generating fluent utterances that make sense. For contextual coherence and emotional appropriateness, MEED achieved higher average scores than S2S and HRAN, which means MEED keeps better track of the context and can generate responses that are emotionally more appropriate and natural. We conducted a Friedman test BIBREF31 on the human evaluation results, which shows that the improvements of MEED are significant (with $p$-value $<0.01$).
Evaluation ::: Results ::: Case Study
We present four sample dialogs in Table TABREF36, along with the responses generated by the three models. Dialog 1 and 2 are emotionally positive and dialog 3 and 4 are negative. For the first two examples, we can see that MEED is able to generate more emotional content (like “fun” and “congratulations”) that is appropriate according to the context. For dialog 4, MEED responds in sympathy to the other speaker, which is consistent with the second utterance in the context. On the contrary, HRAN poses a question in reply, contradicting the dialog history.
Conclusion and Future Work
According to the Media Equation Theory BIBREF32, people respond to computers socially. This means humans expect to talk to computers the way they talk to other human beings. This is why we believe reproducing social and conversational intelligence will make social chatbots more believable and socially engaging. In this paper, we propose a multi-turn dialog system capable of generating emotionally appropriate responses, which is the first step toward such a goal. We have demonstrated how to do so by (1) modeling utterances with extra affect vectors, (2) creating an emotional encoding mechanism that learns emotion exchanges in the dataset, (3) curating a multi-turn dialog dataset, and (4) evaluating the model with offline and online experiments.
As future work, we would like to investigate the diversity issue of the responses generated, possibly by extending the mutual information objective function BIBREF5 to multi-turn settings. We would also like to evaluate our model on a larger dataset, for example by extracting multi-turn dialogs from the OpenSubtitles corpus BIBREF33. | No |
180047e1ccfc7c98f093b8d1e1d0479a4cca99cc | 180047e1ccfc7c98f093b8d1e1d0479a4cca99cc_0 | Q: What two baseline models are used?
Text: Introduction
Recent development in neural language modeling has generated significant excitement in the open-domain dialog generation community. The success of sequence-to-sequence learning BIBREF0, BIBREF1 in the field of neural machine translation has inspired researchers to apply the recurrent neural network (RNN) encoder-decoder structure to response generation BIBREF2. Specifically, the encoder RNN reads the input message, encodes it into a fixed context vector, and the decoder RNN uses it to generate the response. Shang et al. BIBREF3 applied the same structure combined with attention mechanism BIBREF4 on Twitter-style microblogging data. Following the vanilla sequence-to-sequence structure, various improvements have been made on the neural conversation model—for example, increasing the diversity of the response BIBREF5, BIBREF6, modeling personalities of the speakers BIBREF7, and developing topic aware dialog systems BIBREF8.
Some of the recent work aims at incorporating affect information into neural conversational models. While making the responses emotionally richer, existing approaches either explicitly require an emotion label as input BIBREF9, or rely on hand-crafted rules to determine the desired emotion responses BIBREF10, BIBREF11, ignoring the subtle emotional interactions captured in multi-turn conversations, which we believe to be an important aspect of human dialogs. For example, Gottman BIBREF12 found that couples are likely to practice the so called emotional reciprocity. When an argument starts, one partner's angry and aggressive utterance is often met with equally furious and negative utterance, resulting in more heated exchanges. On the other hand, responding with complementary emotions (such as reassurance and sympathy) is more likely to lead to a successful relationship. However, to the best of our knowledge, the psychology and social science literature does not offer clear rules for emotional interaction. It seems such social and emotional intelligence is captured in our conversations. This is why we believe that the data driven approach will have an advantage.
In this paper, we propose an end-to-end, data-driven multi-turn dialog system capable of learning and generating emotionally appropriate and human-like responses, with the ultimate goal of reproducing social behaviors that are habitual in human-human conversations. We chose the multi-turn setting because it is in such settings that emotional appropriateness matters most. To this end, we employ the latest multi-turn dialog model by Xing et al. BIBREF13, but we add an additional emotion RNN to process the emotional information in each history utterance. By leveraging an external text analysis program, we encode the emotion aspects of each utterance into a fixed-size binary indicator vector. This emotion RNN reads and encodes the input affect information, and then uses the final hidden state as the emotion representation vector for the context. When decoding, at each time step, this emotion vector is concatenated with the hidden state of the decoder and passed to the softmax layer to produce the probability distribution over the vocabulary.
Thereby, our contributions are threefold. (1) We propose a novel emotion-tracking dialog generation model that learns the emotional interactions directly from the data. This approach is free of human-defined heuristic rules, and hence is more robust and fundamental than those described in existing work BIBREF9, BIBREF10, BIBREF11. (2) We apply the emotion-tracking mechanism to multi-turn dialogs, which has never been attempted before. Human evaluation shows that our model produces responses that are emotionally more appropriate than the baselines, while slightly improving the language fluency. (3) We illustrate a human-evaluation approach for judging machine-produced emotional dialogs. We consider factors such as the balance of positive and negative sentiments in test dialogs, a well-chosen range of topics, and dialogs that our human evaluators can relate to. It is the first time such an approach has been designed with consideration for the human judges. Our main goal is to increase the objectivity of the results and reduce judges' mistakes due to out-of-context dialogs they have to evaluate.
The rest of the paper unfolds as follows. Section SECREF2 discusses some related work. In Section SECREF3, we give detailed description of the methodology. We present experimental results and some analysis in Section SECREF4. The paper is concluded in Section SECREF5, followed by some future work we plan to do.
Related Work
Many early open-domain dialog systems are rule-based and often require expert knowledge to develop. More recent work in response generation seeks data-driven solutions, leveraging on machine learning techniques and the availability of data. Ritter et al. BIBREF14 first applied statistical machine translation (SMT) methods to this area. However, it turns out that bilingual translation and response generation are different. The source and target sentences in translation share the same meaning; thus the words in the two sentences tend to align well with each other. However, for response generation, one could have many equally good responses for a single input. Later studies use the sequence-to-sequence neural framework to model dialogs, followed by various improving work on the quality of the responses, especially the emotional aspects of the conversations.
The vanilla RNN encoder-decoder is usually applied to single-turn response generation, where the response is generated based on one single input message. In multi-turn settings, where a context with multiple history utterances is given, the same structure often ignores the hierarchical characteristic of the context. Some recent work addresses this problem by adopting a hierarchical recurrent encoder-decoder (HRED) structure BIBREF15, BIBREF16, BIBREF17. To give attention to different parts of the context while generating responses, Xing et al. BIBREF13 proposed the hierarchical recurrent attention network (HRAN) that uses a hierarchical attention mechanism. However, these multi-turn dialog models do not take into account the turn-taking emotional changes of the dialog.
Recent work on incorporating affect information into natural language processing tasks, such as building emotional dialog systems and affect language models, has inspired our current work. For example, the Emotional Chatting Machine (ECM) BIBREF9 takes as input a post and a specified emotional category and generates a response that belongs to the pre-defined emotion category. The main idea is to use an internal memory module to capture the emotion dynamics during decoding, and an external memory module to model emotional expressions explicitly by assigning different probability values to emotional words as opposed to regular words. However, the problem setting requires an emotional label as an input, which might be unpractical in real scenarios. Asghar et al. BIBREF10 proposed to augment the word embeddings with a VAD (valence, arousal, and dominance) affective space by using an external dictionary, and designed three affect-related loss functions, namely minimizing affective dissonance, maximizing affective dissonance, and maximizing affective content. The paper also proposed the affectively diverse beam search during decoding, so that the generated candidate responses are as affectively diverse as possible. However, literature in affective science does not necessarily validate such rules. In fact, the best strategy to speak to an angry customer is the de-escalation strategy (using neutral words to validate anger) rather than employing equally emotional words (minimizing affect dissonance) or words that convey happiness (maximizing affect dissonance). Zhong et al. BIBREF11 proposed a biased attention mechanism on affect-rich words in the input message, also by taking advantage of the VAD embeddings. The model is trained with a weighted cross-entropy loss function, which encourages the generation of emotional words. However, these models only deal with single-turn conversations. More importantly, they all adopt hand-coded emotion responding mechanisms. To our knowledge, we are the first to consider modeling the emotional flow and its appropriateness in a multi-turn dialog system by learning from humans.
Model
In this paper, we consider the problem of generating response $\mathbf {y}$ given a context $\mathbf {X}$ consisting of multiple previous utterances by estimating the probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ from a data set $\mathcal {D}=\lbrace (\mathbf {X}^{(i)},\mathbf {y}^{(i)})\rbrace _{i=1}^N$ containing $N$ context-response pairs. Here
is a sequence of $m_i$ utterances, and
is a sequence of $n_{ij}$ words. Similarly,
is the response with $T_i$ words.
Usually the probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ can be modeled by an RNN language model conditioned on $\mathbf {X}$. When generating the word $y_t$ at time step $t$, the context $\mathbf {X}$ is encoded into a fixed-sized dialog context vector $\mathbf {c}_t$ by following the hierarchical attention structure in HRAN BIBREF13. Additionally, we extract the emotion information from the utterances in $\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\mathbf {e}$, which is combined with $\mathbf {c}_t$ to produce the distribution. The overall architecture of the model is depicted in Figure FIGREF4. We are going to elaborate on how to obtain $\mathbf {c}_t$ and $\mathbf {e}$, and how they are combined in the decoding part.
Model ::: Hierarchical Attention
The hierarchical attention structure involves two encoders to produce the dialog context vector $\mathbf {c}_t$, namely the word-level encoder and the utterance-level encoder. The word-level encoder is essentially a bidirectional RNN with gated recurrent units (GRU) BIBREF1. For utterance $\mathbf {x}_j$ in $\mathbf {X}$ ($j=1,2,\dots ,m$), the bidirectional encoder produces two hidden states at each word position $k$, the forward hidden state $\mathbf {h}^\mathrm {f}_{jk}$ and the backward hidden state $\mathbf {h}^\mathrm {b}_{jk}$. The final hidden state $\mathbf {h}_{jk}$ is then obtained by concatenating the two,
The utterance-level encoder is a unidirectional RNN with GRU that goes from the last utterance in the context to the first, with its input at each step as the summary of the corresponding utterance, which is obtained by applying a Bahdanau-style attention mechanism BIBREF4 on the word-level encoder output. More specifically, at decoding step $t$, the summary of utterance $\mathbf {x}_j$ is a linear combination of $\mathbf {h}_{jk}$, for $k=1,2,\dots ,n_j$,
Here $\alpha _{jk}^t$ is the word-level attention score placed on $\mathbf {h}_{jk}$, and can be calculated as
where $\mathbf {s}_{t-1}$ is the previous hidden state of the decoder, $\mathbf {\ell }_{j+1}^t$ is the previous hidden state of the utterance-level encoder, and $\mathbf {v}_a$, $\mathbf {U}_a$, $\mathbf {V}_a$ and $\mathbf {W}_a$ are word-level attention parameters. The final dialog context vector $\mathbf {c}_t$ is then obtained as another linear combination of the outputs of the utterance-level encoder $\mathbf {\ell }_{j}^t$, for $j=1,2,\dots ,m$,
Here $\beta _{j}^t$ is the utterance-level attention score placed on $\mathbf {\ell }_{j}^t$, and can be calculated as
where $\mathbf {s}_{t-1}$ is the previous hidden state of the decoder, and $\mathbf {v}_b$, $\mathbf {U}_b$ and $\mathbf {W}_b$ are utterance-level attention parameters.
Model ::: Emotion Encoder
In order to capture the emotion information carried in the context $\mathbf {X}$, we utilize an external text analysis program called the Linguistic Inquiry and Word Count (LIWC) BIBREF18. LIWC accepts text files as input, and then compares each word in the input with a user-defined dictionary, assigning it to one or more of the pre-defined psychologically-relevant categories. We make use of five of these categories, related to emotion, namely positive emotion, negative emotion, anxious, angry, and sad. Using the newest version of the program LIWC2015, we are able to map each utterance $\mathbf {x}_j$ in the context to a six-dimensional indicator vector ${1}(\mathbf {x}_j)$, with the first five entries corresponding to the five emotion categories, and the last one corresponding to neutral. If any word in $\mathbf {x}_j$ belongs to one of the five categories, then the corresponding entry in ${1}(\mathbf {x}_j)$ is set to 1; otherwise, $\mathbf {x}_j$ is treated as neutral, with the last entry of ${1}(\mathbf {x}_j)$ set to 1. For example, assuming $\mathbf {x}_j=$ “he is worried about me”, then
since the word “worried” is assigned to both negative emotion and anxious. We apply a dense layer with sigmoid activation function on top of ${1}(\mathbf {x}_j)$ to embed the emotion indicator vector into a continuous space,
where $\mathbf {W}_e$ and $\mathbf {b}_e$ are trainable parameters. The emotion flow of the context $\mathbf {X}$ is then modeled by a unidirectional RNN with GRU going from the first utterance in the context to the last, with its input being $\mathbf {a}_j$ at each step. The final emotion context vector $\mathbf {e}$ is obtained as the last hidden state of this emotion-encoding RNN.
Model ::: Decoding
The probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ can be written as
We model the probability distribution using an RNN language model along with the emotion context vector $\mathbf {e}$. Specifically, at time step $t$, the hidden state of the decoder $\mathbf {s}_t$ is obtained by applying the GRU function,
where $\mathbf {w}_{y_{t-1}}$ is the word embedding of $y_{t-1}$. Similar to Affect-LM BIBREF19, we then define a new feature vector $\mathbf {o}_t$ by concatenating $\mathbf {s}_t$ with the emotion context vector $\mathbf {e}$,
on which we apply a softmax layer to obtain a probability distribution over the vocabulary,
Each term in Equation (DISPLAY_FORM16) is then given by
We use the cross-entropy loss as our objective function
Evaluation
We trained our model using two different datasets and compared its performance with HRAN as well as the basic sequence-to-sequence model by performing both offline and online testing.
Evaluation ::: Datasets
We use two different dialog corpora to train our model—the Cornell Movie Dialogs Corpus BIBREF20 and the DailyDialog dataset BIBREF21.
Cornell Movie Dialogs Corpus. The dataset contains 83,097 dialogs (220,579 conversational exchanges) extracted from raw movie scripts. In total there are 304,713 utterances.
DailyDialog. The dataset was developed by crawling raw data from websites that language learners use to study English dialogs in daily life. It contains 13,118 dialogs in total.
We summarize some of the basic information regarding the two datasets in Table TABREF25.
In our experiments, the models are first trained on the Cornell Movie Dialogs Corpus, and then fine-tuned on the DailyDialog dataset. We adopted this training pattern because the Cornell dataset is bigger but noisier, while DailyDialog is smaller but closer to everyday conversation. To create a training set and a validation set for each of the two datasets, we take segments of each dialog with at most six turns to serve as the training/validation examples. Specifically, for each dialog $\mathbf {D}=(\mathbf {x}_1,\mathbf {x}_2,\dots ,\mathbf {x}_M)$, we create $M-1$ context-response pairs, namely $\mathbf {U}_i=(\mathbf {x}_{s_i},\dots ,\mathbf {x}_i)$ and $\mathbf {y}_i=\mathbf {x}_{i+1}$, for $i=1,2,\dots ,M-1$, where $s_i=\max (1,i-4)$. We filter out those pairs that have at least one utterance with length greater than 30. We also reduce the frequency of those pairs whose responses appear too many times (the threshold is set to 10 for Cornell, and 5 for DailyDialog), to prevent them from dominating the learning procedure. See Table TABREF25 for the sizes of the training and validation sets. The test set consists of 100 dialogs with four turns. We give a more detailed description of how we create the test set in Section SECREF31.
Evaluation ::: Baselines and Implementation
We compared our multi-turn emotionally engaging dialog model (denoted as MEED) with two baselines—the vanilla sequence-to-sequence model (denoted as S2S) and HRAN. We chose S2S and HRAN as baselines because we would like to evaluate our model's capability to keep track of the multi-turn context and to produce emotionally more appropriate responses, respectively. In order to adapt S2S to the multi-turn setting, we concatenate all the history utterances in the context into one.
For all the models, the vocabulary consists of the 20,000 most frequent words in the Cornell and DailyDialog datasets, plus three extra tokens: <unk> for words that do not exist in the vocabulary, <go> indicating the beginning of an utterance, and <eos> indicating the end of an utterance. Here we summarize the configurations and parameters of our experiments:
We set the word embedding size to 256. We initialized the word embeddings in the models with word2vec BIBREF22 vectors first trained on Cornell and then fine-tuned on DailyDialog, consistent with the training procedure of the models.
We set the number of hidden units of each RNN to 256, the word-level attention depth to 256, and the utterance-level attention depth to 128. The output size of the emotion embedding layer is 256.
We optimized the objective function using the Adam optimizer BIBREF23 with an initial learning rate of 0.001. We stopped training the models when the lowest perplexity on the validation sets was achieved.
For prediction, we used beam search BIBREF24 with a beam width of 256.
Evaluation ::: Evaluation Metrics
The evaluation of chatbots remains an open problem in the field. Recent work BIBREF25 has shown that the automatic evaluation metrics borrowed from machine translation such as BLEU score BIBREF26 tend to align poorly with human judgement. Therefore, in this paper, we mainly adopt human evaluation, along with perplexity, following the existing work.
Evaluation ::: Evaluation Metrics ::: Human evaluation setup
To develop a test set for human evaluation, we first selected the emotionally colored dialogs with exactly four turns from the DailyDialog dataset. In the dataset, each dialog turn is annotated with a corresponding emotional category, including the neutral one. For our purposes we kept only those dialogs in which more than half of the utterances have non-neutral emotional labels. This gave us 78 emotionally positive dialogs and 14 emotionally negative dialogs. In order to have a balanced test set with an equal number of positive and negative dialogs, we recruited two English-speaking students from our university without any relationship to the authors' lab and instructed them to create five four-turn negative dialogs for each of the following topics, as if they were interacting with another human: relationships, entertainment, service, work and study, and everyday situations. Thus each person produced 25 dialogs, and in total we obtained 50 emotionally negative daily dialogs in addition to the 14 already available. To form the test set, we randomly selected 50 emotionally positive and 50 emotionally negative dialogs from the two pools of dialogs described above (78 positive dialogs from DailyDialog, 64 negative dialogs from DailyDialog and human-generated).
For human evaluation of the models, we recruited another four English-speaking students from our university without any relationship to the authors' lab to rate the responses generated by the models. Specifically, we randomly shuffled the 100 dialogs in the test set, then we used the first three utterances of each dialog as the input to the three models being compared and let them generate the responses. According to the context given, the raters were instructed to evaluate the quality of the responses based on three criteria: (1) grammatical correctness—whether or not the response is fluent and free of grammatical mistakes; (2) contextual coherence—whether or not the response is context sensitive to the previous dialog history; (3) emotional appropriateness—whether or not the response conveys the right emotion and feels as if it had been produced by a human. For each criterion, the raters gave scores of either 0, 1 or 2, where 0 means bad, 2 means good, and 1 indicates neutral.
Evaluation ::: Results
Table TABREF34 gives the perplexity scores obtained by the three models on the two validation sets and the test set. As shown in the table, MEED achieves the lowest perplexity score on all three sets. We also conducted a t-test on the perplexity scores obtained, and the results show significant improvements (with $p$-value $<0.05$).
Tables TABREF34, TABREF35 and TABREF35 summarize the human evaluation results on the responses' grammatical correctness, contextual coherence, and emotional appropriateness, respectively. In the tables, we give the percentage of votes each model received for the three scores, the average score obtained with improvements over S2S, and the agreement score among the raters. Note that we report Fleiss' $\kappa $ score BIBREF27 for contextual coherence and emotional appropriateness, and Finn's $r$ score BIBREF28 for grammatical correctness. We did not use Fleiss' $\kappa $ for grammatical correctness because agreement on this criterion is extremely high, which can make Fleiss' $\kappa $ very sensitive to prevalence BIBREF29. Conversely, we did not use Finn's $r$ for contextual coherence and emotional appropriateness because it is only reasonable when the observed variance is significantly less than the chance variance BIBREF30, which did not apply to these two criteria. As shown in the tables, we got high agreement among the raters for grammatical correctness, and fair agreement among the raters for contextual coherence and emotional appropriateness. For grammatical correctness, all three models achieved high scores, which means all models are capable of generating fluent utterances that make sense. For contextual coherence and emotional appropriateness, MEED achieved higher average scores than S2S and HRAN, which means MEED keeps better track of the context and can generate responses that are emotionally more appropriate and natural. We conducted a Friedman test BIBREF31 on the human evaluation results, which shows that the improvements of MEED are significant (with $p$-value $<0.01$).
Evaluation ::: Results ::: Case Study
We present four sample dialogs in Table TABREF36, along with the responses generated by the three models. Dialog 1 and 2 are emotionally positive and dialog 3 and 4 are negative. For the first two examples, we can see that MEED is able to generate more emotional content (like “fun” and “congratulations”) that is appropriate according to the context. For dialog 4, MEED responds in sympathy to the other speaker, which is consistent with the second utterance in the context. On the contrary, HRAN poses a question in reply, contradicting the dialog history.
Conclusion and Future Work
According to the Media Equation Theory BIBREF32, people respond to computers socially. This means humans expect to talk to computers in the same way they talk to other human beings. This is why we believe reproducing social and conversational intelligence will make social chatbots more believable and socially engaging. In this paper, we propose a multi-turn dialog system capable of generating emotionally appropriate responses, which is the first step toward such a goal. We have demonstrated how to do so by (1) modeling utterances with extra affect vectors, (2) creating an emotional encoding mechanism that learns emotion exchanges in the dataset, (3) curating a multi-turn dialog dataset, and (4) evaluating the model with offline and online experiments.
As future work, we would like to investigate the diversity issue of the responses generated, possibly by extending the mutual information objective function BIBREF5 to multi-turn settings. We would also like to evaluate our model on a larger dataset, for example by extracting multi-turn dialogs from the OpenSubtitles corpus BIBREF33. | sequence-to-sequence model (denoted as S2S), HRAN |
fb3687ea05d38b5e65fdbbbd1572eacd82f56c0b | fb3687ea05d38b5e65fdbbbd1572eacd82f56c0b_0 | Q: Do they evaluate on relation extraction?
Text: Introduction
Building knowledge graphs (KG) over Web corpora is an important problem that has galvanized effort from multiple communities over two decades BIBREF0 , BIBREF1 . Automated knowledge graph construction from Web resources involves several different phases. The first phase involves domain discovery, which constitutes identification of sources, followed by crawling and scraping of those sources BIBREF2 . A contemporaneous ontology engineering phase is the identification and design of key classes and properties in the domain of interest (the domain ontology) BIBREF3 .
Once a set of (typically unstructured) data sources has been identified, an Information Extraction (IE) system needs to extract structured data from each page in the corpus BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . In IE systems based on statistical learning, sequence labeling models like Conditional Random Fields (CRFs) can be trained and used for tagging the scraped text from each data source with terms from the domain ontology BIBREF8 , BIBREF7 . With enough data and computational power, deep neural networks can also be used for a range of collective natural language tasks, including chunking and extraction of named entities and relationships BIBREF9 .
While IE has been well-studied both for cross-domain Web sources (e.g. Wikipedia) and for traditional domains like biomedicine BIBREF10 , BIBREF11 , it is less well-studied (Section "Related Work" ) for dynamic domains that undergo frequent changes in content and structure. Such domains include news feeds, social media, advertising, and online marketplaces, but also illicit domains like human trafficking. Automatically constructing knowledge graphs containing important information like ages (of human trafficking victims), locations, prices of services and posting dates over such domains could have widespread social impact, since law enforcement and federal agencies could query such graphs to glean rapid insights BIBREF12 .
Illicit domains pose some formidable challenges for traditional IE systems, including deliberate information obfuscation, non-random misspellings of common words, high occurrences of out-of-vocabulary and uncommon words, frequent (and non-random) use of Unicode characters, sparse content and heterogeneous website structure, to only name a few BIBREF12 , BIBREF13 , BIBREF14 . While some of these characteristics are shared by more traditional domains like chat logs and Twitter, both information obfuscation and extreme content heterogeneity are unique to illicit domains. While this paper only considers the human trafficking domain, similar kinds of problems are prevalent in other illicit domains that have a sizable Web (including Dark Web) footprint, including terrorist activity, and sales of illegal weapons and counterfeit goods BIBREF15 .
As real-world illustrative examples, consider the text fragments `Hey gentleman im neWYOrk and i'm looking for generous...' and `AVAILABLE NOW! ?? - (4 two 4) six 5 two - 0 9 three 1 - 21'. In the first instance, the correct extraction for a Name attribute is neWYOrk, while in the second instance, the correct extraction for an Age attribute is 21. It is not obvious what features should be engineered in a statistical learning-based IE system to achieve robust performance on such text.
To compound the problem, wrapper induction systems from the Web IE literature cannot always be applied in such domains, as many important attributes can only be found in text descriptions, rather than template-based Web extractors that wrappers traditionally rely on BIBREF6 . Constructing an IE system that is robust to these problems is an important first step in delivering structured knowledge bases to investigators and domain experts.
In this paper, we study the problem of robust information extraction in dynamic, illicit domains with unstructured content that does not necessarily correspond to a typical natural language model, and that can vary tremendously between different Web domains, a problem denoted more generally as concept drift BIBREF16 . Illicit domains like human trafficking also tend to exhibit a `long tail'; hence, a comprehensive solution should not rely on information extractors being tailored to pages from a small set of Web domains.
There are two main technical challenges that such domains present to IE systems. First, as the brief examples above illustrate, feature engineering in such domains is difficult, mainly due to the atypical (and varying) representation of information. Second, investigators and domain experts require a lightweight system that can be quickly bootstrapped. Such a system must be able to generalize from few ( $\approx $ 10-150) manual annotations, but be incremental from an engineering perspective, especially since a given illicit Web page can quickly (i.e. within hours) become obsolete in the real world, and the search for leads and information is always ongoing. In effect, the system should be designed for streaming data.
We propose an information extraction approach that is able to address the challenges above, especially the variance between Web pages and the small training set per attribute, by combining two sequential techniques in a novel paradigm. The overall approach is illustrated in Figure 1 . First, a high-recall recognizer, which could range from an exhaustive Linked Data source like GeoNames (e.g. for extracting locations) to a simple regular expression (e.g. for extracting ages), is applied to each page in the corpus to derive a set of candidate annotations for an attribute per page. In the second step, we train and apply a supervised feature-agnostic classification algorithm, based on learning word representations from random projections, to classify each candidate as correct/incorrect for its attribute.
Contributions We summarize our main contributions as follows: (1) We present a lightweight feature-agnostic information extraction system for a highly heterogeneous, illicit domain like human trafficking. Our approach is simple to implement, does not require extensive parameter tuning, infrastructure setup and is incremental with respect to the data, which makes it suitable for deployment in streaming-corpus settings. (2) We show that the approach shows good generalization even when only a small corpus is available after the initial domain-discovery phase, and is robust to the problem of concept drift encountered in large Web corpora. (3) We test our approach extensively on a real-world human trafficking corpus containing hundreds of thousands of Web pages and millions of unique words, many of which are rare and highly domain-specific. Evaluations show that our approach outperforms traditional Named Entity Recognition baselines that require manual feature engineering. Specific empirical highlights are provided below.
Empirical highlights Comparisons against CRF baselines based on the latest Stanford Named Entity Resolution system (including pre-trained models as well as new models that we trained on human trafficking data) show that, on average, across five ground-truth datasets, our approach outperforms the next best system on the recall metric by about 6%, and on the F1-measure metric by almost 20% in low-supervision settings (30% training data), and almost 20% on both metrics in high-supervision settings (70% training data). Concerning efficiency, in a serial environment, we are able to derive word representations on a 43 million word corpus in under an hour. Degradation in average F1-Measure score achieved by the system is less than 2% even when the underlying raw corpus expands by a factor of 18, showing that the approach is reasonably robust to concept drift.
Structure of the paper Section "Related Work" describes some related work on Information Extraction. Section "Approach" provides details of key modules in our approach. Section "Evaluations" describes experimental evaluations, and Section "Conclusion" concludes the work.
Related Work
Information Extraction (IE) is a well-studied research area both in the Natural Language Processing community and in the World Wide Web, with the reader referred to the survey by Chang et al. for an accessible coverage of Web IE approaches BIBREF17 . In the NLP literature, IE problems have predominantly been studied as Named Entity Recognition and Relationship Extraction BIBREF7 , BIBREF18 . The scope of Web IE has been broad in recent years, extending from wrappers to Open Information Extraction (OpenIE) BIBREF6 , BIBREF19 .
In the Semantic Web, domain-specific extraction of entities and properties is a fundamental aspect in constructing instance-rich knowledge bases (from unstructured corpora) that contribute to the Semantic Web vision and to ecosystems like Linked Open Data BIBREF20 , BIBREF21 . A good example of such a system is Lodifier BIBREF22 . This work is along the same lines, in that we are interested in user-specified attributes and wish to construct a knowledge base (KB) with those attribute values using raw Web corpora. However, we are not aware of any IE work in the Semantic Web that has used word representations to accomplish this task, or that has otherwise outperformed state-of-the-art systems without manual feature engineering.
The work presented in this paper is structurally similar to the geolocation prediction system (from Twitter) by Han et al. and also ADRMine, an adverse drug reaction (ADR) extraction system from social media BIBREF23 , BIBREF24 . Unlike these works, our system is not optimized for specific attributes like locations and drug reactions, but generalizes to a range of attributes. Also, as mentioned earlier, illicit domains involve challenges not characteristic of social media, notably information obfuscation.
In recent years, state-of-the-art results have been achieved in a variety of NLP tasks using word representation methods like neural embeddings BIBREF25 . Unlike the problem covered in this paper, those papers typically assume an existing KB (e.g. Freebase), and attempt to infer additional facts in the KB using word representations. In contrast, we study the problem of constructing and populating a KB per domain-specific attribute from scratch with only a small set of initial annotations from crawled Web corpora.
The problem studied in this paper also has certain resemblances to OpenIE BIBREF19 . One assumption in OpenIE systems is that a given fact (codified, for example, as an RDF triple) is observed in multiple pages and contexts, which allows the system to learn new `extraction patterns' and rank facts by confidence. In illicit domains, a `fact' may only be observed once; furthermore, the arcane and high-variance language models employed in the domain makes direct application of any extraction pattern-based approach problematic. To the best of our knowledge, the specific problem of devising feature-agnostic, low-supervision IE approaches for illicit Web domains has not been studied in prior work.
Approach
Figure 1 illustrates the architecture of our approach. The input is a Web corpus containing relevant pages from the domain of interest, and high-recall recognizers (described in Section "Applying High-Recall Recognizers" ) typically adapted from freely available Web resources like Github and GeoNames. In keeping with the goals of this work, we do not assume that this initial corpus is static. That is, following an initial short set-up phase, more pages are expected to be added to the corpus in a streaming fashion. Given a set of pre-defined attributes (e.g. City, Name, Age) and around 10-100 manually verified annotations for each attribute, the goal is to learn an IE model that accurately extracts attribute values from each page in the corpus without relying on expert feature engineering. Importantly, while the pages are single-domain (e.g. human trafficking) they are multi-Web domain, meaning that the system must not only handle pages from new websites as they are added to the corpus, but also concept drift in the new pages compared to the initial corpus.
Preprocessing
The first module in Figure 1 is an automated pre-processing algorithm that takes as input a streaming set of HTML pages. In real-world illicit domains, the key information of interest to investigators (e.g. names and ages) typically occurs either in the text or the title of the page, not the template of the website. Even when the information occasionally occurs in a template, it must be appropriately disambiguated to be useful. Wrapper-based IE systems BIBREF6 are often inapplicable as a result. As a first step in building a more suitable IE model, we scrape the text from each HTML website by using a publicly available text extractor called the Readability Text Extractor (RTE). Although multiple tools are available for text extraction from HTML BIBREF26 , our early trials showed that RTE is particularly suitable for noisy Web domains, owing to its tuneability, robustness and support for developers. We tune RTE to achieve high recall, thus ensuring that the relevant text in the page is captured in the scraped text with high probability. Note that, because of the varied structure of websites, such a setting also introduces noise in the scraped text (e.g. wayward HTML tags). Furthermore, unlike natural language documents, scraped text can contain many irrelevant numbers, Unicode and punctuation characters, and may not be regular. Because of the presence of numerous tab and newline markers, there is no obvious natural language sentence structure in the scraped text. In the most general case, we found that RTE returned a set of strings, with each string corresponding to a set of sentences.
To serialize the scraped text as a list of tokens, we use the word and sentence tokenizers from the NLTK package on each RTE string output BIBREF27 . We apply the sentence tokenizer first, and to each sentence returned (which often does not correspond to an actual sentence due to rampant use of extraneous punctuation characters) by the sentence tokenizer, we apply the standard NLTK word tokenizer. The final output of this process is a list of tokens. In the rest of this section, this list of tokens is assumed as representing the HTML page from which the requisite attribute values need to be extracted.
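A minimal sketch of this serialization step is given below. It assumes the scraper's output is available as a list of strings and that NLTK's default tokenizers are used, as described above; the function name and example inputs are illustrative.

```python
from nltk.tokenize import sent_tokenize, word_tokenize  # may require nltk.download('punkt')

def serialize_scraped_text(rte_strings):
    """Turn the extractor's string outputs into a single flat token list.
    Sentence-tokenize each string first (the 'sentences' are often noisy fragments),
    then word-tokenize every sentence."""
    tokens = []
    for block in rte_strings:
        for sentence in sent_tokenize(block):
            tokens.extend(word_tokenize(sentence))
    return tokens

page_tokens = serialize_scraped_text(["AVAILABLE NOW! Call 555-0100.", "Hey gentleman im neWYOrk"])
```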
Deriving Word Representations
In principle, given some annotated data, a sequence labeling model like a Conditional Random Field (CRF) can be trained and applied on each block of scraped text to extract values for each attribute BIBREF8 , BIBREF7 . In practice, as we empirically demonstrate in Section "Evaluations" , CRFs prove to be problematic for illicit domains. First, the size of the training data available for each CRF is relatively small, and because of the nature of illicit domains, methods like distant supervision or crowdsourcing cannot be used in an obvious timely manner to elicit annotations from users. A second problem with CRFs, and other traditional machine learning models, is the careful feature engineering that is required for good performance. With small amounts of training data, good features are essential for generalization. In the case of illicit domains, it is not always clear what features are appropriate for a given attribute. Even common features like capitalization can be misleading, as there are many capitalized words in the text that are not of interest (and vice versa).
To alleviate feature engineering and manual annotation effort, we leverage the entire raw corpus in our model learning phase, rather than just the pages that have been annotated. Specifically, we use an unsupervised algorithm to represent each word in the corpus in a low-dimensional vector space. Several algorithms exist in the literature for deriving such representations, including neural embedding algorithms such as Word2vec BIBREF25 and the algorithm by Bollegala et al. BIBREF28 , as well as simpler alternatives BIBREF29 .
Given the dynamic nature of streaming illicit-domain data, and the numerous word representation learning algorithms in the literature, we adapted the random indexing (RI) algorithm for deriving contextual word representations BIBREF29 . Random indexing methods mathematically rely on the Johnson-Lindenstrauss Lemma, which states that if points in a vector space are of sufficiently high dimension, then they may be projected into a suitable lower-dimensional space in a way which approximately preserves the distances between the points.
The original random indexing algorithm was designed for incremental dimensionality reduction and text mining applications. We adapt this algorithm for learning word representations in illicit domains. Before describing these adaptations, we define some key concepts below.
Definition. Given parameters $d \in \mathbb {Z}^{+}$ and $r \in [0, 1]$, a context vector is defined as a $d$-dimensional vector, of which exactly $\lfloor d r \rfloor$ elements are randomly set to $+1$, exactly $\lfloor d r \rfloor$ elements are randomly set to $-1$, and the remaining $d-2\lfloor d r \rfloor$ elements are set to 0.
We denote the parameters $d$ and $r$ in the definition above as the dimension and sparsity ratio parameters respectively.
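The following sketch shows one way to generate such a context vector with NumPy; with the settings used later in the paper ($d=200$, $r=0.01$), this places two $+1$ entries and two $-1$ entries at random positions. The function name and random-number handling are our own choices.

```python
import numpy as np

def make_context_vector(d=200, r=0.01, rng=None):
    """Return a d-dimensional context vector: floor(d*r) random entries set to +1,
    floor(d*r) other random entries set to -1, and all remaining entries 0."""
    rng = rng if rng is not None else np.random.default_rng()
    k = int(np.floor(d * r))
    vec = np.zeros(d)
    idx = rng.choice(d, size=2 * k, replace=False)   # distinct positions for the +1s and -1s
    vec[idx[:k]] = 1.0
    vec[idx[k:]] = -1.0
    return vec
```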
Intuitively, a context vector is defined for every atomic unit in the corpus. Let us denote the universe of atomic units as $U$ , assumed to be a partially observed countably infinite set. In the current scenario, every unigram (a single `token') in the dataset is considered an atomic unit. Extending the definition to also include higher-order ngrams is straightforward, but was found to be unnecessary in our early empirical investigations. The universe is only partially observed because of the incompleteness (i.e. streaming, dynamic nature) of the initial corpus.
The actual vector space representation of an atomic unit is derived by defining an appropriate context for the unit. Formally, a context is an abstract notion that is used for assigning distributional semantics to the atomic unit. The distributional semantics hypothesis (also called Firth's axiom) states that the semantics of an atomic unit (e.g. a word) is defined by the contexts in which it occurs BIBREF30 .
In this paper, we only consider short contexts appropriate for noisy streaming data. In this vein, we define the notion of a $(u, v)$ -context window below:
Given a list $t$ of atomic units and an integer position $0<i\le |t|$, a $(u, v)$-context window is defined by the set $S-t[i]$, where $S$ is the set of atomic units inclusively spanning positions $\max (i-u, 1)$ and $\min (i+v, |t|)$.
Using just these two definitions, a naive version of the RI algorithm is illustrated in Figure 2 for the sentence `the cow jumped over the moon', assuming a $(2,2)$ -context window and unigrams as atomic units. For each new word encountered by the algorithm, a context vector (Definition "Deriving Word Representations" ) is randomly generated, and the representation vector for the word is initialized to the 0 vector. Once generated, the context vector for the word remains fixed, but the representation vector is updated with each occurrence of the word.
The update happens as follows. Given the context of the word (ranging from a set of 2-4 words), an aggregation is first performed on the corresponding context vectors. In Figure 2 , for example, the aggregation is an unweighted sum. Using the aggregated vector (denoted by the symbol $\vec{a}$ ), we update the representation vector using the equation below, with $\vec{w}_i$ being the representation vector derived after the $i^{th}$ occurrence of word $w$ :
$$\vec{w}_{i+1} = \vec{w}_i+\vec{a}$$ (Eq. 9)
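A compact sketch of the naive algorithm described above is given below. It is self-contained (the context-vector generation is inlined), uses a symmetric $(u, v)$-context window with 0-indexed positions, and applies the update of Eq. 9 with an unweighted sum over the window's context vectors; all names and defaults are illustrative.

```python
import numpy as np
from collections import defaultdict

def naive_random_indexing(tokens, d=200, r=0.01, u=2, v=2, seed=0):
    """Learn a representation vector for every token via naive random indexing."""
    rng = np.random.default_rng(seed)
    k = int(np.floor(d * r))

    def new_context_vector():
        vec = np.zeros(d)
        idx = rng.choice(d, size=2 * k, replace=False)
        vec[idx[:k]], vec[idx[k:]] = 1.0, -1.0
        return vec

    context = {}                               # fixed random context vectors
    rep = defaultdict(lambda: np.zeros(d))     # representation vectors being learned
    for tok in tokens:
        if tok not in context:
            context[tok] = new_context_vector()

    for i, tok in enumerate(tokens):
        lo, hi = max(i - u, 0), min(i + v, len(tokens) - 1)
        window = [tokens[j] for j in range(lo, hi + 1) if j != i]
        if window:
            rep[tok] += np.sum([context[w] for w in window], axis=0)   # update of Eq. 9
    return dict(rep)

vectors = naive_random_indexing("the cow jumped over the moon".split())
```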
In principle, using this simple algorithm, we could learn a vector space representation for every atomic unit. One issue with a naive embedding of every atomic unit into a vector space is the presence of rare atomic units. These are especially prevalent in illicit domains, not just in the form of rare words, but also as sequences of Unicode characters, sequences of HTML tags, and numeric units (e.g. phone numbers), each of which only occurs a few times (often, only once) in the corpus.
To address this issue, we define below the notion of a compound unit that is based on a pre-specified condition.
Given a universe $U$ of atomic units and a binary condition $R: U \rightarrow \lbrace True,False\rbrace $ , the compound unit $C_R$ is defined as the largest subset of $U$ such that $R$ evaluates to True on every member of $C_R$ .
Example: For `rare' words, we could define the compound unit high-idf-units to contain all atomic units that are below some document frequency threshold (e.g. 1%) in the corpus.
In our implemented prototype, we defined six mutually exclusive compound units, described and enumerated in Table 1 . We modify the naive RI algorithm by only learning a single vector for each compound unit. Intuitively, each atomic unit $w$ in a compound unit $C$ is replaced by a special dummy symbol $w_C$ ; hence, after algorithm execution, each atomic unit in $C$ is represented by the single vector $\vec{w}_C$ .
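As an illustration of how one compound unit—the high-idf-units class—might be applied in practice, the sketch below replaces every token whose document frequency falls below a threshold with a single dummy symbol, so that one shared vector is learned for the whole class. The threshold, dummy symbol and function names are assumptions for the sketch; the prototype defines six such classes.

```python
from collections import Counter

def collapse_high_idf_units(docs, df_threshold=0.01, dummy="<HIGH_IDF_UNIT>"):
    """Replace rare tokens with one dummy symbol shared by the compound unit.
    `docs` is a list of token lists, one per Web page."""
    n_docs = len(docs)
    df = Counter()
    for toks in docs:
        df.update(set(toks))                              # document frequency per token
    rare = {t for t, c in df.items() if c / n_docs < df_threshold}
    return [[dummy if t in rare else t for t in toks] for toks in docs]
```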
Applying High-Recall Recognizers
For a given attribute (e.g. City) and a given corpus, we define a recognizer as a function that, if known, can be used to exactly determine the instances of the attribute occurring in the corpus. Formally, a recognizer $R_A$ for attribute $A$ is a function that takes a list $t$ of tokens and positions $i$ and $j \ge i$ as inputs, and returns True if the tokens contiguously spanning $t[i]:t[j]$ are instances of $A$, and False otherwise. It is important to note that, per the definition above, a recognizer cannot annotate latent instances that are not directly observed in the list of tokens.
Since the `ideal' recognizer is not known, the broad goal of IE is to devise models that approximate it (for a given attribute) with high accuracy. Accuracy is typically measured in terms of precision and recall metrics. We formulate a two-pronged approach whereby, rather than develop a single recognizer that has both high precision and recall (and requires considerable expertise to design), we first obtain a list of candidate annotations that have high recall in expectation, and then use supervised classification in a second step to improve precision of the candidate annotations.
More formally, let $R_A$ be denoted as an $\eta $ -recall recognizer if the expected recall of $R_A$ is at least $\eta $ . Due to the explosive growth in data, many resources on the Web can be used for bootstrapping recognizers that are `high-recall' in that $\eta $ is in the range of 90-100%. The high-recall recognizers currently used in the prototype described in this paper (detailed further in Section "System" ) rely on knowledge bases (e.g. GeoNames) from Linked Open Data BIBREF20 , dictionaries from the Web and broad heuristics, such as regular expression extractors, found in public Github repositories. In our experience, we found that even students with basic knowledge of GitHub and Linked Open Data sources are able to construct such recognizers. One important reason why constructing such recognizers is relatively hassle-free is because they are typically monotonic i.e. new heuristics and annotation sources can be freely integrated, since we do not worry about precision at this step.
We note that in some cases, domain knowledge alone is enough to guarantee 100% recall for well-designed recognizers for certain attributes. In HT, this is true for location attributes like city and state, since advertisements tend to state locations without obfuscation, and we use GeoNames, an exhaustive knowledge base of locations, as our recognizer. Manual inspection of the ground-truth data showed that the recall of utilized recognizers for attributes like Name and Age are also high (in many cases, 100%). Thus, although 100% recall cannot be guaranteed for any recognizer, it is still reasonable to assume that $\eta $ is high.
A much more difficult problem is engineering a recognizer to simultaneously achieve high recall and high precision. Even for recognizers based on curated knowledge bases like GeoNames, many non-locations get annotated as locations. For example, the word `nice' is a city in France, but is also a commonly occurring adjective. Other common words like `for', `hot', `com', `kim' and `bella' also occur in GeoNames as cities and would be annotated. Using a standard Named Entity Recognition system does not always work because of the language modeling problem (e.g. missing capitalization) in illicit domains. In the next section, we show how the context surrounding the annotated word can be used to classify the annotation as correct or incorrect. We note that, because the recognizers are high-recall, a successful classifier would yield both high precision and recall.
Supervised Contextual Classifier
To address the precision problem, we train a classifier using contextual features. Rather than rely on a domain expert to provide a set of hand-crafted features, we derive a feature vector per candidate annotation using the notion of a context window (Definition "Deriving Word Representations" ) and the word representation vectors derived in Section "Deriving Word Representations" . This process of supervised contextual classification is illustrated in Figure 3 .
Specifically, for each annotation (which could comprise multiple contiguous tokens e.g. `Salt Lake City' in the list of tokens representing the website) annotated by a recognizer, we consider the tokens in the $(u, v)$ -context window around the annotation. We aggregate the vectors of those tokens into a single vector by performing an unweighted sum, followed by $l2$ -normalization. We use this aggregate vector as the contextual feature vector for that annotation. Note that, unlike the representation learning phase, where the surrounding context vectors were aggregated into an existing representation vector, the contextual feature vector is obtained by summing the actual representation vectors.
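The construction of this contextual feature vector can be sketched as follows. It assumes the token list of the page, the (start, end) token span of a candidate annotation, and the representation dictionary produced by the random-indexing step; the symmetric $(2, 2)$ window and the $l2$-normalized unweighted sum follow the description above, while names and the handling of unseen tokens are our own choices.

```python
import numpy as np

def contextual_feature_vector(tokens, start, end, rep, u=2, v=2, d=200):
    """Sum the representation vectors of the (u, v)-context window around the
    candidate span tokens[start..end], then l2-normalize the result."""
    window = tokens[max(start - u, 0):start] + tokens[end + 1:end + 1 + v]
    vec = np.zeros(d)
    for tok in window:
        vec += rep.get(tok, np.zeros(d))    # unseen tokens contribute nothing
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec
```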
For each attribute, a supervised machine learning classifier (e.g. random forest) is trained using between 12-120 labeled annotations, and for new data, the remaining annotations can be classified using the trained classifier. Although the number of dimensions in the feature vectors is quite low compared to tf-idf vectors (hundreds vs. millions), a second round of dimensionality reduction can be applied by using (either supervised or unsupervised) feature selection for further empirical benefits (Section "Evaluations" ).
Datasets and Ground-truths
We train the word representations on four real-world human trafficking datasets of increasing size, the details of which are provided in Table 2 . Since we assume a `streaming' setting in this paper, each larger dataset in Table 2 is a strict superset of the smaller datasets. The largest dataset is itself a subset of the overall human trafficking corpus that was scraped as part of research conducted in the DARPA MEMEX program.
Since ground-truth extractions for the corpus are unknown, we randomly sampled websites from the overall corpus, applied four high-recall recognizers described in Section "System" , and for each annotated set, manually verified whether the extractions were correct or incorrect for the corresponding attribute. The details of this sampled ground-truth are captured in Table 3 . Each annotation set is named using the format GT-{RawField}-{AnnotationAttribute}, where RawField can be either the HTML title or the scraped text (Section "Preprocessing" ). and AnnotationAttribute is the attribute of interest for annotation purposes.
System
The overall system requires developing two components for each attribute: a high-recall recognizer and a classifier for pruning annotations. We developed four high-recall recognizers, namely GeoNames-Cities, GeoNames-States, RegEx-Ages and Dictionary-Names. The first two of these rely on the freely available GeoNames dataset BIBREF31 ; we use the entire dataset for our experiments, which involves modeling each GeoNames dictionary as a trie, owing to its large memory footprint. For extracting ages, we rely on simple regular expressions and heuristics that were empirically verified to capture a broad set of age representations. For the name attribute, we gather freely available name dictionaries on the Web, in multiple countries and languages, and use the dictionaries in a case-insensitive recognition algorithm to locate names in the raw field (i.e. text or title).
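To make the recognizer interface concrete, the sketch below shows a toy regular-expression age recognizer in the spirit of RegEx-Ages. The actual patterns and heuristics used in the prototype are richer; the single pattern here (bare two-digit tokens between 18 and 59) is purely an assumption for illustration.

```python
import re

AGE_TOKEN = re.compile(r"^(1[89]|[2-5][0-9])$")   # illustrative pattern only

def age_recognizer(tokens, i, j):
    """Recognizer interface from the earlier definition: True iff tokens[i..j]
    looks like an age mention. Only single-token spans are handled here."""
    return i == j and bool(AGE_TOKEN.match(tokens[i]))

def candidate_ages(tokens):
    """Run the recognizer over all single-token spans to collect candidate annotations."""
    return [(i, i, tokens[i]) for i in range(len(tokens)) if age_recognizer(tokens, i, i)]
```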
Baselines
We use different variants of the Stanford Named Entity Recognition system (NER) as our baselines BIBREF7 . For the first set of baselines, we use two pre-trained models trained on different English language corpora. Specifically, we use the 3-Class and 4-Class pre-trained models. We use the LOCATION class label for determining city and state annotations, and the PERSON label for name annotations. Unfortunately, there is no specific label corresponding to age annotations in the pre-trained models; hence, we do not use the pre-trained models as age annotation baselines.
It is also possible to re-train the underlying NER system on a new dataset. For the second set of baselines, therefore, we re-train the NER models by randomly sampling 30% and 70% of each annotation set in Table 3 respectively, with the remaining annotations used for testing. The features and values that were employed in the re-trained models are enumerated in Table 4 . Further documentation on these feature settings may be found on the NERFeatureFactory page. All training and testing experiments were done in ten independent trials. We use default parameter settings, and report average results for each experimental run. Experimentation using other configurations, features and values is left for future studies.
Setup and Parameters
Parameter tuning System parameters were set as follows. The number of dimensions in Definition "Deriving Word Representations" was set at 200, and the sparsity ratio was set at 0.01. These parameters are similar to those suggested in previous word representation papers; they were also found to yield intuitive results on semantic similarity experiments (described further in Section "Discussion" ). To avoid the problem of rare words, numbers, punctuation and tags, we used the six compound unit classes earlier described in Table 1 . In all experiments where defining a context was required, we used symmetric $(2,2)$ -context windows; using bigger windows was not found to offer much benefit. We trained a random forest model with default hyperparameters (10 trees, with Gini Impurity as the split criterion) as the supervised classifier, used supervised k-best feature selection with $k$ set to 20 (Section "Supervised Contextual Classifier" ), and with the Analysis of Variance (ANOVA) F-statistic between class label and feature used as the feature scoring function.
Because of the class skew in Table 3 (i.e. the `positive' class is typically much smaller than the `negative' class) we oversampled the positive class for balanced training of the supervised contextual classifier.
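Putting the reported settings together, a scikit-learn sketch of the contextual classifier is shown below: supervised k-best feature selection with the ANOVA F-statistic ($k=20$), a default random forest (10 trees, Gini impurity), and oversampling of the positive class. The oversampling strategy (upsampling positives to match the negatives with sklearn's resample) is our assumption of one reasonable implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.utils import resample

def train_contextual_classifier(X, y, k=20, seed=0):
    """Train the supervised contextual classifier on feature vectors X and
    binary labels y (1 = correct annotation, 0 = incorrect)."""
    X, y = np.asarray(X), np.asarray(y)
    pos, neg = X[y == 1], X[y == 0]
    pos_up = resample(pos, replace=True, n_samples=len(neg), random_state=seed)
    X_bal = np.vstack([neg, pos_up])
    y_bal = np.concatenate([np.zeros(len(neg)), np.ones(len(pos_up))])
    model = make_pipeline(
        SelectKBest(f_classif, k=k),
        RandomForestClassifier(n_estimators=10, criterion="gini", random_state=seed),
    )
    return model.fit(X_bal, y_bal)
```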
Metrics The metrics used for evaluating IE effectiveness are Precision, Recall and F1-measure.
Implementation In the interests of demonstrating a reasonably lightweight system, all experiments in this paper were run on a serial iMac with a 4 GHz Intel core i7 processor and 32 GB RAM. All code (except the Stanford NER code) was written in the Python programming language, and has been made available on a public Github repository with documentation and examples. We used Python's Scikit-learn library (v0.18) for the machine learning components of the prototype.
Results
Performance against baselines Table 5 illustrates system performance on Precision, Recall and F1-Measure metrics against the re-trained and pre-trained baseline models, where the re-trained model and our approach were trained on 30% of the annotations in Table 3 . We used the word representations derived from the D-ALL corpus. On average, the proposed system performs the best on F1-Measure and recall metrics. The re-trained NER is the most precise system, but at the cost of much less recall ( $<$ 30%). The good performance of the pre-trained baseline on the City attribute demonstrates the importance of having a large training corpus, even if the corpus is not directly from the test domain. On the other hand, the complete failure of the pre-trained baseline on the Name attribute illustrates the dangers of using out-of-domain training data. As noted earlier, language models in illicit domains can significantly differ from natural language models; in fact, names in human trafficking websites are often represented in a variety of misleading ways.
Recognizing that 30% training data may constitute a sample size too small to make reliable judgments, we also tabulate the results in Table 6 when the training percentage is set at 70. Performance improves for both the re-trained baseline and our system. Performance declines for the pre-trained baseline, but this may be because of the sparseness of positive annotations in the smaller test set.
We also note that performance is relatively well-balanced for our system; on all datasets and all metrics, the system achieves scores greater than 50%. This suggests that our approach has a degree of robustness that the CRFs are unable to achieve; we believe that this is a direct consequence of using contextual word representation-based feature vectors.
Runtimes We recorded the runtimes for learning word representations using the random indexing algorithm described earlier on the four datasets in Table 2 , and plot the runtimes in Figure 4 as a function of the total number of words in each corpus.
In agreement with the expected theoretical time-complexity of random indexing, the empirical run-time is linear in the number of words, for fixed parameter settings. More importantly, the absolute times show that the algorithm is extremely lightweight: on the D-ALL corpus, we are able to learn representations in under an hour.
We note that these results do not employ any obvious parallelization or the multi-core capabilities of the machine. The linear scaling properties of the algorithm show that it can be used even for very large Web corpora. In future, we will investigate an implementation of the algorithm in a distributed setting.
Robustness to corpus size and quality One issue with using large corpora to derive word representations is concept drift. The D-ALL corpora, for example, contains tens of different Web domains, even though they all pertain to human trafficking. An interesting empirical issue is whether a smaller corpus (e.g. D-10K or D-50K) contains enough data for the derived word representations to converge to reasonable values. Not only would this alleviate initial training times, but it would also partially compensate for concept drift, since it would be expected to contain fewer unique Web domains.
Tables 7 and 8 show that such generalization is possible. The best F1-Measure performance, in fact, is achieved for D-10K, although the average F1-Measures vary by a margin of less than 2% on all cases. We cite this as further evidence of the robustness of the overall approach.
Effects of feature selection Finally, we evaluate the effects of feature selection in Figure 5 on the GT-Text-Name dataset, with training percentage set at 30. The results show that, although performance is reasonably stable for a wide range of $k$ , some feature selection is necessary for better generalization.
Discussion
Table 9 contains some examples (in bold) of cities that got correctly extracted, with the bold term being assigned the highest score by the contextual classifier that was trained for cities. The examples provide good evidence for the kinds of variation (i.e. concept drift) that are often observed in real-world human trafficking data over multiple Web domains. Some domains, for example, were found to have the same kind of structured format as the second row of Table 9 (i.e. Location: followed by the actual locations), but many other domains were far more heterogeneous.
The results in this section also illustrate the merits of unsupervised feature engineering and contextual supervision. In principle, there is no reason why the word representation learning module in Figure 1 cannot be replaced by a more adaptive algorithm like Word2vec BIBREF25 . We note again that, before applying such algorithms, it is important to deal with the heterogeneity problem that arises from having many different Web domains present in the corpus. While earlier results in this section (Tables 7 and 8 ) showed that random indexing is reasonably stable as more websites are added to the corpus, we also verify this robustness qualitatively using a few domain-specific examples in Table 10 . We ran the qualitative experiment as follows: for each seed token (e.g. `tall'), we searched for the two nearest neighbors in the semantic space induced by random indexing by applying cosine similarity, using two different word representation datasets (D-10K and D-ALL). As the results in Table 10 show, the induced distributional semantics are stable; even when the nearest neighbors are different (e.g. for `tall'), their semantics still tend to be similar.
Another important point implied by both the qualitative and quantitative results on D-10K is that random indexing is able to generalize quickly even on small amounts of data. To the best of our knowledge, it remains an open question (both theoretically and empirically) at the time of writing whether state-of-the-art neural embedding-based word representation learners can (1) generalize on small quantities of data, especially in a single epoch (`streaming data'), and (2) adequately compensate for concept drift with the same degree of robustness, and in the same lightweight manner, as the random indexing method that we adapted and evaluated in this paper. A broader empirical study on this issue is warranted.
Concerning contextual supervision, we qualitatively visualize the inputs to the contextual city classifier using the t-SNE tool BIBREF32 . We use the ground-truth labels to determine the color of each point in the projected 2d space. The plot in Figure 6 shows that there is a reasonable separation of labels; interestingly there are also `sub-clusters' among the positively labeled points. Each sub-cluster provides evidence for a similar context; the number of sub-clusters even in this small sample of points again illustrates the heterogeneity in the underlying data.
A last issue that we mention is the generalization of the method to more unconventional attributes than the ones evaluated herein. In ongoing work, we have experimented with more domain-specific attributes such as ethnicity (of escorts), and have achieved similar performance. In general, the presented method is applicable whenever the context around the extraction is a suitable clue for disambiguation.
Conclusion
In this paper, we presented a lightweight, feature-agnostic Information Extraction approach that is suitable for illicit Web domains. Our approach relies on unsupervised derivation of word representations from an initial corpus, and the training of a supervised contextual classifier using external high-recall recognizers and a handful of manually verified annotations. Experimental evaluations show that our approach can outperform feature-centric CRF-based approaches for a range of generic attributes. Key modules of our prototype are publicly available (see footnote 15) and can be efficiently bootstrapped in a serial computing environment. Some of these modules are already being used in real-world settings. For example, they were recently released as tools for graduate-level participants in the End Human Trafficking hackathon organized by the office of the District Attorney of New York. At the time of writing, the system is being actively maintained and updated.
Acknowledgements The authors gratefully acknowledge the efforts of Lingzhe Teng, Rahul Kapoor and Vinay Rao Dandin, for sampling and producing the ground-truths in Table 3 . This research is supported by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL) under contract number FA8750- 14-C-0240. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, AFRL, or the U.S. Government. | No |
b5d6357d3a9e3d5fdf9b344ae96cddd11a407875 | b5d6357d3a9e3d5fdf9b344ae96cddd11a407875_0 | Q: What is the baseline model for the agreement-based mode?
Text: Introduction
A learner language (interlanguage) is an idiolect developed by a learner of a second or foreign language which may preserve some features of his/her first language. Previously, encouraging results of automatically building the syntactic analysis of learner languages were reported BIBREF0 , but it is still unknown how well semantic processing performs, even though parsing a learner language (L2) into semantic representations is the foundation of a variety of deeper analyses of learner languages, e.g., automatic essay scoring. In this paper, we study semantic parsing for interlanguage, taking semantic role labeling (SRL) as a case task and learner Chinese as a case language.
Before discussing a computation system, we first consider the linguistic competence and performance. Can human robustly understand learner texts? Or to be more precise, to what extent, a native speaker can understand the meaning of a sentence written by a language learner? Intuitively, the answer is towards the positive side. To validate this, we ask two senior students majoring in Applied Linguistics to carefully annotate some L2-L1 parallel sentences with predicate–argument structures according to the specification of Chinese PropBank BIBREF1 , which is developed for L1. A high inter-annotator agreement is achieved, suggesting the robustness of language comprehension for L2. During the course of semantic annotation, we find a non-obvious fact that we can re-use the semantic annotation specification, Chinese PropBank in our case, which is developed for L1. Only modest rules are needed to handle some tricky phenomena. This is quite different from syntactic treebanking for learner sentences, where defining a rich set of new annotation heuristics seems necessary BIBREF2 , BIBREF0 , BIBREF3 .
Our second concern is to mimic humans' robust semantic processing ability with computer programs. The feasibility of reusing the annotation specification for L1 implies that we can reuse standard CPB data to train an SRL system to process learner texts. To test the robustness of the state-of-the-art SRL algorithms, we evaluate two types of SRL frameworks. The first one is a traditional SRL system that leverages a syntactic parser and heavy feature engineering to obtain explicit information of semantic roles BIBREF4 . Furthermore, we employ two different parsers for comparison: 1) the PCFGLA-based parser, viz. Berkeley parser BIBREF5 , and 2) a minimal span-based neural parser BIBREF6 . The other SRL system uses a stacked BiLSTM to implicitly capture local and non-local information BIBREF7 , and we call it the neural syntax-agnostic system. All systems can achieve state-of-the-art performance on L1 texts but show a significant degradation on L2 texts. This highlights the weakness of applying an L1-sentence-trained system to process learner texts.
While the neural syntax-agnostic system obtains superior performance on the L1 data, the two syntax-based systems both produce better analyses on the L2 data. Furthermore, as illustrated in the comparison between different parsers, the better the parsing results we get, the better the performance on L2 we achieve. This shows that syntactic parsing is important in semantic construction for learner Chinese. The main reason, according to our analysis, is that the syntax-based system may generate correct syntactic analyses for partial grammatical fragments in L2 texts, which provides crucial information for SRL. Therefore, syntactic parsing helps build more generalizable SRL models that transfer better to new languages, and enhancing syntactic parsing can improve SRL to some extent.
Our last concern is to explore the potential of a large-scale set of L2-L1 parallel sentences to enhance SRL systems. We find that semantic structures of the L2-L1 parallel sentences are highly consistent. This inspires us to design a novel agreement-based model to explore such semantic coherency information. In particular, we define a metric for comparing predicate–argument structures and searching for relatively good automatic syntactic and semantic annotations to extend the training data for SRL systems. Experiments demonstrate the value of the L2-L1 parallel sentences as well as the effectiveness of our method. We achieve an F-score of 72.06, which is a 2.02 percentage point improvement over the best neural-parser-based baseline.
To the best of our knowledge, this is the first time that the L2-L1 parallel data is utilized to enhance NLP systems for learner texts.
For research purposes, we have released our SRL annotations on 600 sentence pairs and the L2-L1 parallel dataset.
An L2-L1 Parallel Corpus
An L2-L1 parallel corpus can greatly facilitate the analysis of a learner language BIBREF9 . Following mizumoto:2011, we collected a large dataset of L2-L1 parallel texts of Mandarin Chinese by exploring “language exchange" social networking services (SNS), i.e., Lang-8, a language-learning website where native speakers can freely correct the sentences written by foreign learners. The proficiency levels of the learners are diverse, but most of the learners, according to our judgment, is of intermediate or lower level.
Our initial collection consists of 1,108,907 sentence pairs from 135,754 essays. As there is lots of noise in raw sentences, we clean up the data by (1) ruling out redundant content, (2) excluding sentences containing foreign words or Chinese phonetic alphabet by checking the Unicode values, (3) dropping overly simple sentences which may not be informative, and (4) utilizing a rule-based classifier to determine whether to include the sentence into the corpus.
The final corpus consists of 717,241 learner sentences from writers of 61 different native languages, in which English and Japanese constitute the majority. As for completeness, 82.78% of the Chinese Second Language sentences on Lang-8 are corrected by native human annotators. One sentence gets corrected approximately 1.53 times on average.
In this paper, we manually annotate the predicate–argument structures for the 600 L2-L1 pairs as the basis for the semantic analysis of learner Chinese. It is from the above corpus that we carefully select 600 pairs of L2-L1 parallel sentences. We choose the most appropriate correction among the multiple available versions and re-correct the L1 sentences when necessary. Because word structure is fundamental for various NLP tasks, our annotation also contains gold word segmentation for both L2 and L1 sentences. Note that there are no natural word boundaries in Chinese text. We first employ a state-of-the-art word segmentation system to produce initial segmentation results and then manually fix segmentation errors.
The dataset includes four typologically different mother tongues, i.e., English (ENG), Japanese (JPN), Russian (RUS) and Arabic (ARA). Sub-corpus of each language consists of 150 sentence pairs. We take the mother languages of the learners into consideration, which have a great impact on grammatical errors and hence automatic semantic analysis. We hope that four selected mother tongues guarantee a good coverage of typologies. The annotated corpus can be used both for linguistic investigation and as test data for NLP systems.
The Annotation Process
Semantic role labeling (SRL) is the process of assigning semantic roles to constituents or their head words in a sentence according to their relationship to the predicates expressed in the sentence. Typical semantic roles can be divided into core arguments and adjuncts. The core arguments include Agent, Patient, Source, Goal, etc, while the adjuncts include Location, Time, Manner, Cause, etc.
To create a standard semantic-role-labeled corpus for learner Chinese, we first annotate a 50-sentence trial set for each native language. Two senior students majoring in Applied Linguistics conducted the annotation. Based on a total of 400 sentences, we adjudicate an initial gold standard, adapting and refining CPB specification as our annotation heuristics. Then the two annotators proceed to annotate a 100-sentence set for each language independently. It is on these larger sets that we report the inter-annotator agreement.
In the final stage, we also produce an adjudicated gold standard for all 600 annotated sentences. This was achieved by comparing the annotations selected by each annotator, discussing the differences, and either selecting one as fully correct or creating a hybrid representing the consensus decision for each choice point. When we felt that the decisions were not already fully guided by the existing annotation guidelines, we worked to articulate an extension to the guidelines that would support the decision.
During the annotation, the annotators apply both position labels and semantic role labels. Position labels include S, B, I and E, which are used to mark whether the word is an argument by itself, or at the beginning or in the middle or at the end of a argument. As for role labels, we mainly apply representations defined by CPB BIBREF1 . The predicate in a sentence was labeled as rel, the core semantic roles were labeled as AN and the adjuncts were labeled as AM.
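To illustrate how the position labels combine with the role labels, the sketch below converts one argument span into per-token tags. The exact tag concatenation format (e.g. "B-A0") and the "O" tag for tokens outside any argument are assumptions made for illustration; the paper only specifies the S/B/I/E position labels and the CPB role labels.

```python
def span_to_tags(sent_len, start, end, role):
    """Tag the tokens of a sentence of length `sent_len` for one argument span
    covering positions start..end (inclusive) with semantic role `role`."""
    tags = ["O"] * sent_len
    if start == end:
        tags[start] = "S-" + role          # single-token argument
    else:
        tags[start] = "B-" + role          # beginning of the argument
        for i in range(start + 1, end):
            tags[i] = "I-" + role          # inside the argument
        tags[end] = "E-" + role            # end of the argument
    return tags

# e.g. a two-token A0 argument at positions 0-1 in a 5-token sentence:
print(span_to_tags(5, 0, 1, "A0"))        # ['B-A0', 'E-A0', 'O', 'O', 'O']
```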
Inter-annotator Agreement
For inter-annotator agreement, we evaluate the precision (P), recall (R), and F1-score (F) of the semantic labels given by the two annotators. Table TABREF5 shows that our inter-annotator agreement is promising. All L1 texts have F-score above 95, and we take this as a reflection that our annotators are qualified. F-scores on L2 sentences are all above 90, just a little bit lower than those of L1, indicating that L2 sentences can be greatly understood by native speakers. Only modest rules are needed to handle some tricky phenomena:
The labeled argument should be strictly limited to the core roles defined in the frameset of CPB, though the number of arguments in L2 sentences may be more or less than the number defined.
For the roles in L2 that cannot be labeled as arguments under the specification of CPB, if they provide semantic information such as time, location and reason, we label them as adjuncts, even though they may not be well-formed adjuncts due to the absence of function words.
For unnecessary roles in L2 caused by mistakes of verb subcategorization (see examples in Figure FIGREF30 ), we would leave those roles unlabeled.
Table TABREF10 further reports agreements on each argument (AN) and adjunct (AM) in detail, according to which the high scores are attributed to the high agreement on arguments (AN). The labels of A3 and A4 have no disagreement since they are sparse in CPB and are usually used to label specific semantic roles that have little ambiguity.
We also conducted in-depth analysis on inter-annotator disagreement. For further details, please refer to duan2018argument.
Three SRL Systems
The work on SRL has included a broad spectrum of machine learning and deep learning approaches to the task. Early work showed that syntactic information is crucial for learning long-range dependencies, syntactic constituency structure and global constraints BIBREF10 , BIBREF11 , while initial studies on neural methods achieved state-of-the-art results with little to no syntactic input BIBREF12 , BIBREF13 , BIBREF14 , BIBREF7 . However, the question whether fully labeled syntactic structures provide an improvement for neural SRL is still unsettled pending further investigation.
To evaluate the robustness of state-of-the-art SRL algorithms, we evaluate two representative SRL frameworks. One is a traditional syntax-based SRL system that leverages a syntactic parser and manually crafted features to obtain explicit information to find semantic roles BIBREF15 , BIBREF16 . In particular, we employ the system introduced in BIBREF4 . This system first collects all c-commanders of a predicate in question from the output of a parser and puts them in order. It then employs a first-order linear-chain global linear model to perform semantic tagging. For constituent parsing, we use two parsers for comparison: one is the Berkeley parser BIBREF5 , a well-known implementation of the unlexicalized latent variable PCFG model, and the other is a minimal span-based neural parser based on independent scoring of labels and spans BIBREF6 . As proposed in BIBREF6 , the second parser is capable of achieving state-of-the-art single-model performance on the Penn Treebank. On the Chinese TreeBank BIBREF17 , it also outperforms the Berkeley parser for the in-domain test. We call the corresponding SRL systems the PCFGLA-parser-based and neural-parser-based systems.
The second SRL framework leverages an end-to-end neural model to implicitly capture local and non-local information BIBREF12 , BIBREF7 . In particular, this framework treats SRL as a BIO tagging problem and uses a stacked BiLSTM to find informative embeddings. We apply the system introduced in BIBREF7 for experiments. Because all syntactic information (including POS tags) is excluded, we call this system the neural syntax-agnostic system.
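For reference, a highly simplified PyTorch sketch of such a syntax-agnostic BIO tagger is given below. It is not the exact architecture of BIBREF7, which uses deeper, highway-connected LSTMs and other refinements; the hyperparameters, embedding setup and class name are illustrative.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Stacked bidirectional LSTM that scores a BIO-style SRL label for every token."""
    def __init__(self, vocab_size, n_labels, emb_dim=100, hidden=300, layers=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=layers,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_labels)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        states, _ = self.lstm(self.emb(token_ids))
        return self.out(states)                   # (batch, seq_len, n_labels) label scores
```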
To train the three SRL systems as well as the supporting parsers, we use the CTB and CPB data . In particular, the sentences selected for the CoNLL 2009 shared task are used here for parameter estimation. Note that, since the Berkeley parser is based on PCFGLA grammar, it may fail to get the syntactic outputs for some sentences, while the other parser does not have that problem. In this case, we have made sure that both parsers can parse all 1,200 sentences successfully.
Main Results
The overall performances of the three SRL systems on both L1 and L2 data (150 parallel sentences for each mother tongue) are shown in Table TABREF11 . For all systems, significant decreases on different mother languages can be consistently observed, highlighting the weakness of applying L1-sentence-trained systems to process learner texts. Comparing the two syntax-based systems with the neural syntax-agnostic system, we find that the overall INLINEFORM0 F, which denotes the F-score drop from L1 to L2, is smaller in the syntax-based framework than in the syntax-agnostic system. On English, Japanese and Russian L2 sentences, the syntax-based system has better performances though it sometimes works worse on the corresponding L1 sentences, indicating the syntax-based systems are more robust when handling learner texts.
Furthermore, the neural-parser-based system achieves the best overall performance on the L2 data. Though performing slightly worse than the neural syntax-agnostic one on the L1 data, it has a much smaller ΔF, showing that as the syntactic analysis improves, the performance on both the L1 and L2 data grows while the gap is maintained. This again demonstrates the importance of syntax in semantic construction, especially for learner texts.
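For clarity, the ΔF reported above is simply the difference between the F-scores obtained on the L1 and L2 test sets; a minimal sketch of the computation, assuming system outputs and gold annotations are represented as sets of role-labeled items, is given below.

```python
# Illustrative computation of F1 and of the L1-to-L2 drop (ΔF).
def f1(predicted, gold):
    """predicted, gold: sets of role-labeled items, e.g. (predicate, span, role)."""
    correct = len(predicted & gold)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def delta_f(pred_l1, gold_l1, pred_l2, gold_l2):
    """F-score drop when moving from L1 to L2 test data."""
    return f1(pred_l1, gold_l1) - f1(pred_l2, gold_l2)
```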
Table TABREF45 summarizes the SRL results of the baseline PCFGLA-parser-based model as well as its corresponding retrained models. Since both the syntactic parser and the SRL classifier can be retrained and thus enhanced, we report their individual impact as well as the combined one. We can clearly see that when the PCFGLA parser is retrained with the SRL-consistent sentence pairs, it is able to provide better SRL-oriented syntactic analyses for the L2 sentences as well as their corrections, which are essentially L1 sentences. The outputs of the L1 sentences generated by the deep SRL system are also useful for improving the linear SRL classifier. A non-obvious fact is that such a retrained model yields better analyses not only for L1 but also for L2 sentences. Combining both yields a further improvement.
Table TABREF46 shows the results of the parallel experiments based on the neural parser. Unlike the PCFGLA model, the SRL-consistent trees yield only a slight improvement on the L2 data. In contrast, retraining the SRL classifier is much more effective. This experiment highlights the different strengths of the two parsing frameworks. Although the neural parser performs better on the standard in-domain test and is therefore increasingly popular, the PCFGLA model is stronger in some other scenarios.
Table TABREF47 further shows F-scores of the baseline and the both-retrained model for each role type in detail. Given that the F-scores of both models are equal to 0 on A3 and A4, we omit this part. From the table we can observe that all the semantic roles achieve significant improvements in performance.
Analysis
To better understand the overall results, we look further into the output by addressing the following questions:
What types of errors negatively impact both systems on learner texts?
What types of errors are more problematic for the neural syntax-agnostic system on the L2 data but can, to some extent, be solved by the syntax-based one?
We first carry out a suite of empirical investigations by breaking down error types for more detailed evaluation. To compare the two systems, we analyze results on ENG-L2 and JPN-L2, given that they reflect significant advantages of the syntax-based systems over the neural syntax-agnostic system. Note that the syntax-based system here refers to the neural-parser-based one. Finally, a concrete study of instances in the output is conducted, so as to validate the conclusions of the previous step.
We employ 6 oracle transformations designed by he2017deep to fix various prediction errors sequentially (see details in Table TABREF19) and observe the relative improvement after each operation, so as to obtain fine-grained error types. Figure FIGREF21 compares the two systems in terms of the different mistakes on ENG-L2 and JPN-L2 respectively. After fixing the boundaries of spans, the neural syntax-agnostic system catches up with the other, illustrating that although both systems handle boundary detection poorly on the L2 sentences, the neural syntax-agnostic one suffers more from this type of error.
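To convey the idea of this oracle analysis, the sketch below implements a simplified version of one such operation, boundary fixing: a predicted span whose role matches an overlapping gold span is snapped to the gold boundaries, and the corrected output is then re-scored. The matching conditions used by he2017deep differ in detail; this is only an illustration of how the cost of each error type can be measured.

```python
# Simplified oracle "fix boundaries" transformation (illustrative only).
def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def fix_boundaries(predicted, gold):
    """predicted, gold: lists of (start, end, role) spans for one sentence.
    Snap overlapping, same-role predictions to the gold span boundaries."""
    fixed = []
    for start, end, role in predicted:
        match = next((g for g in gold
                      if g[2] == role and overlaps((start, end), (g[0], g[1]))), None)
        fixed.append((match[0], match[1], role) if match else (start, end, role))
    return fixed

# Re-scoring `fixed` against `gold` (e.g., with the f1 sketch above) shows how
# much span-boundary errors alone cost each system.
```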
Excluding boundary errors (after moving, merging, splitting spans and fixing boundaries), we also compare the two systems on L2 in terms of detailed label identification, so as to observe which semantic roles are more likely to be incorrectly labeled. Figure FIGREF24 shows the confusion matrices. Comparing (a) with (c) and (b) with (d), we can see that both the syntax-based and the neural system often over-label A1 when processing learner texts. Besides, compared with the syntax-based system, the neural syntax-agnostic system predicts the adjunct AM more often than necessary on L2 sentences, by 54.24%.
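The confusion matrices themselves are straightforward to compute once boundary errors are excluded: for every predicted span whose boundaries coincide with a gold span, the pair of gold and predicted labels is tallied, as in the small sketch below (the data structures are our own assumptions).

```python
# Sketch: label confusion over spans whose boundaries are already correct.
from collections import Counter

def confusion(predicted, gold):
    """predicted, gold: dicts mapping (start, end) spans to role labels
    for one sentence. Tallies (gold_label, predicted_label) pairs."""
    matrix = Counter()
    for span, pred_label in predicted.items():
        if span in gold:
            matrix[(gold[span], pred_label)] += 1
    return matrix

# Aggregating `confusion` over all sentences and normalizing by row gives
# matrices analogous to those compared in Figure FIGREF24.
```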
Based on the typical error types found in the previous stage, namely boundary-detection errors and incorrect labels, we further conduct a closer investigation of the output sentences.
Previous work has proposed that the drop in performance of SRL systems mainly occurs in identifying argument boundaries BIBREF18. According to our results, this problem is exacerbated on L2 sentences, while syntactic structure sometimes helps to address it.
Figure FIGREF30 is an example of an output sentence. The Chinese word “也” (also) usually serves as an adjunct but is now used for linking the parallel structure “用 汉语 也 说话 快” (using Chinese also speaking quickly) in this sentence, which is ill-formed to native speakers and negatively affects the boundary detection of A0 for both systems.
On the other hand, the neural system incorrectly takes the whole part before “很 难” (very hard) as A0, regardless of the adjunct “对 我 来说” (for me), while this can be figured out by exploiting syntactic analysis, as illustrated in Figure FIGREF30. The constituent “对 我 来说” (for me) has been recognized as a prepositional phrase (PP) attached to the VP and is thus labeled as AM. This shows that by providing information about well-formed sub-trees associated with correct semantic roles, the syntax-based system can perform better than the neural one on SRL for learner texts.
A second common source of errors is wrong labels, especially for A1. Based on our quantitative analysis, reported in Table TABREF37, these phenomena are mainly caused by errors in verb subcategorization, where the systems label more arguments than the predicates allow. In addition, the deep end-to-end system is also likely to incorrectly attach adjuncts (AM) to the predicates.
Figure FIGREF30 is another example. The Chinese verb “做饭” (cook-meal) is intransitive, but this sentence uses it as a transitive verb, which is very common in L2. Lacking proper verb subcategorization information, both systems fail to recognize that such verbs allow only one argument and label A1 incorrectly.
As for AM, the neural system mistakenly attaches the adjunct to the predicate, which can be avoided by using the syntactic information of the sentence shown in Figure FIGREF30. The constituent “常常” (often) is an adjunct attached to the VP governed by the verb “练习” (practice), and therefore should not be labeled as AM with respect to the verb “做饭” (cook-meal). In other words, the hierarchical structure can help in argument identification and assignment by exploiting local information.
Enhancing SRL with L2-L1 Parallel Data
We explore the valuable information about the semantic coherency encoded in the L2-L1 parallel data to improve SRL for learner Chinese. In particular, we introduce an agreement-based model to search for high-quality automatic syntactic and semantic role annotations, and then use these annotations to retrain the two parser-based SRL systems.
The Method
To harvest good automatic syntactic and semantic analyses, we consider the consistency between the automatically produced analysis of a learner sentence and that of its corresponding well-formed sentence. Determining the measurement metric for comparing predicate–argument structures, however, presents another challenge, because the words of the L2 sentence and its L1 counterpart do not necessarily match. To solve this problem, we use BerkeleyAligner BIBREF19, a state-of-the-art tool for obtaining word alignments.
The metric for comparing the SRL results of two sentences is based on the recall of predicate–word–role tuples, where each tuple consists of a predicate, a word that is in an argument or adjunct of that predicate, and the corresponding role. Based on a word alignment, we define a shared tuple as a mutual tuple between the two SRL results of an L2-L1 sentence pair, meaning that the predicate and argument words are both aligned and their role relations are the same. We then have two recall values:
L2-recall is (# of shared tuples) / (# of tuples of the result in L2)
L1-recall is (# of shared tuples) / (# of tuples of the result in L1)
In accordance with the above evaluation method, we select the automatic analyses of the highest-scoring sentences and use them to expand the training data. Sentences whose L1-recall and L2-recall are both greater than a threshold are taken as good ones. A parser-based SRL system consists of two essential modules: a syntactic parser and a semantic classifier. To enhance the syntactic parser, the automatically generated syntactic trees of the sentence pairs that exhibit high semantic consistency are directly used to extend the training data. To improve the semantic classifier, besides the consistent semantic analyses, we also use the outputs on the L1 (but not L2) data generated by the neural syntax-agnostic SRL system.
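A minimal sketch of this agreement-based selection is given below. It assumes SRL outputs are sets of (predicate index, argument-word index, role) triples and that the word alignment is a set of (L2 index, L1 index) pairs; these data structures and names are illustrative choices rather than the exact implementation.

```python
# Illustrative agreement-based selection of reliable automatic annotations.
def shared_tuples(srl_l2, srl_l1, alignment):
    """srl_l2, srl_l1: sets of (pred_idx, arg_idx, role) triples for an L2-L1 pair.
    alignment: set of (l2_idx, l1_idx) word-alignment pairs."""
    shared = set()
    for p2, a2, role in srl_l2:
        if any(role == r1 and (p2, p1) in alignment and (a2, a1) in alignment
               for p1, a1, r1 in srl_l1):
            shared.add((p2, a2, role))
    return shared

def recalls(srl_l2, srl_l1, alignment):
    shared = shared_tuples(srl_l2, srl_l1, alignment)
    l2_recall = len(shared) / len(srl_l2) if srl_l2 else 0.0
    l1_recall = len(shared) / len(srl_l1) if srl_l1 else 0.0
    return l2_recall, l1_recall

def select_pairs(pairs, threshold):
    """pairs: iterable of (srl_l2, srl_l1, alignment) triples.
    Keep pairs whose L1- and L2-recall both reach the threshold."""
    return [p for p in pairs if min(recalls(*p)) >= threshold]
```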
Experimental Setup
Our SRL corpus contains 1,200 sentences in total that can be used to evaluate SRL systems. We separate them into three data sets. The first data set is used as development data; it contains 50 L2-L1 sentence pairs for each language, 200 pairs in total. Hyperparameters are tuned using the development set. The second data set contains all other 400 L2 sentences and is used as test data for L2. Similarly, all other 400 L1 sentences are used as test data for L1.
The sentence pool for extracting retraining annotations includes all English- and Japanese-native speakers' data along with its corrections. Table TABREF43 presents the basic statistics. Around 8.5–11.9% of the sentences can be taken as high L1/L2-recall sentences, which reflects that argument structure is vital for language acquisition and difficult for learners to master, as proposed in vazquez2004learning and shin2010contribution. The threshold for selecting sentences is tuned on the development data. For example, we use an additional 156,520 sentences to enhance the Berkeley parser.
Conclusion
Statistical models of annotating learner texts are making rapid progress. Although there have been some initial studies on defining annotation specifications as well as corpora for syntactic analysis, there is almost no work on semantic parsing for interlanguages. This paper discusses this topic, taking Semantic Role Labeling as a case task and learner Chinese as a case language. We reveal three previously unknown facts that are important for a deeper analysis of learner languages: (1) the robustness of language comprehension for interlanguage, (2) the weakness of applying L1-sentence-trained systems to process learner texts, and (3) the significance of syntactic parsing and L2-L1 parallel data in building more generalizable SRL models that transfer better to L2. We have successfully provided a better SRL-oriented syntactic parser as well as a semantic classifier for processing the L2 data by exploring L2-L1 parallel data, supported by significant numerical improvements over a number of state-of-the-art systems. To the best of our knowledge, this is the first work that demonstrates the effectiveness of large-scale L2-L1 parallel data for enhancing NLP systems for learner texts.
Acknowledgement
This work was supported by the National Natural Science Foundation of China (61772036, 61331011) and the Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the anonymous reviewers for their helpful comments. We also thank Nianwen Xue for useful comments on the final version. Weiwei Sun is the corresponding author.
8a1d4ed00d31c1f1cb05bc9d5e4f05fe87b0e5a4 | 8a1d4ed00d31c1f1cb05bc9d5e4f05fe87b0e5a4_0 | Q: Who manually annotated the semantic roles for the set of learner texts?
Text: Introduction
A learner language (interlanguage) is an idiolect developed by a learner of a second or foreign language which may preserve some features of his/her first language. Previously, encouraging results of automatically building the syntactic analysis of learner languages were reported BIBREF0 , but it is still unknown how semantic processing performs, while parsing a learner language (L2) into semantic representations is the foundation of a variety of deeper analysis of learner languages, e.g., automatic essay scoring. In this paper, we study semantic parsing for interlanguage, taking semantic role labeling (SRL) as a case task and learner Chinese as a case language.
Before discussing a computation system, we first consider the linguistic competence and performance. Can human robustly understand learner texts? Or to be more precise, to what extent, a native speaker can understand the meaning of a sentence written by a language learner? Intuitively, the answer is towards the positive side. To validate this, we ask two senior students majoring in Applied Linguistics to carefully annotate some L2-L1 parallel sentences with predicate–argument structures according to the specification of Chinese PropBank BIBREF1 , which is developed for L1. A high inter-annotator agreement is achieved, suggesting the robustness of language comprehension for L2. During the course of semantic annotation, we find a non-obvious fact that we can re-use the semantic annotation specification, Chinese PropBank in our case, which is developed for L1. Only modest rules are needed to handle some tricky phenomena. This is quite different from syntactic treebanking for learner sentences, where defining a rich set of new annotation heuristics seems necessary BIBREF2 , BIBREF0 , BIBREF3 .
Our second concern is to mimic the human's robust semantic processing ability by computer programs. The feasibility of reusing the annotation specification for L1 implies that we can reuse standard CPB data to train an SRL system to process learner texts. To test the robustness of the state-of-the-art SRL algorithms, we evaluate two types of SRL frameworks. The first one is a traditional SRL system that leverages a syntactic parser and heavy feature engineering to obtain explicit information of semantic roles BIBREF4 . Furthermore, we employ two different parsers for comparison: 1) the PCFGLA-based parser, viz. Berkeley parser BIBREF5 , and 2) a minimal span-based neural parser BIBREF6 . The other SRL system uses a stacked BiLSTM to implicitly capture local and non-local information BIBREF7 . and we call it the neural syntax-agnostic system. All systems can achieve state-of-the-art performance on L1 texts but show a significant degradation on L2 texts. This highlights the weakness of applying an L1-sentence-trained system to process learner texts.
While the neural syntax-agnostic system obtains superior performance on the L1 data, the two syntax-based systems both produce better analyses on the L2 data. Furthermore, as illustrated in the comparison between different parsers, the better the parsing results we get, the better the performance on L2 we achieve. This shows that syntactic parsing is important in semantic construction for learner Chinese. The main reason, according to our analysis, is that the syntax-based system may generate correct syntactic analyses for partial grammatical fragments in L2 texts, which provides crucial information for SRL. Therefore, syntactic parsing helps build more generalizable SRL models that transfer better to new languages, and enhancing syntactic parsing can improve SRL to some extent.
Our last concern is to explore the potential of a large-scale set of L2-L1 parallel sentences to enhance SRL systems. We find that semantic structures of the L2-L1 parallel sentences are highly consistent. This inspires us to design a novel agreement-based model to explore such semantic coherency information. In particular, we define a metric for comparing predicate–argument structures and searching for relatively good automatic syntactic and semantic annotations to extend the training data for SRL systems. Experiments demonstrate the value of the L2-L1 parallel sentences as well as the effectiveness of our method. We achieve an F-score of 72.06, which is a 2.02 percentage point improvement over the best neural-parser-based baseline.
To the best of our knowledge, this is the first time that the L2-L1 parallel data is utilized to enhance NLP systems for learner texts.
For research purpose, we have released our SRL annotations on 600 sentence pairs and the L2-L1 parallel dataset .
An L2-L1 Parallel Corpus
An L2-L1 parallel corpus can greatly facilitate the analysis of a learner language BIBREF9 . Following mizumoto:2011, we collected a large dataset of L2-L1 parallel texts of Mandarin Chinese by exploring “language exchange" social networking services (SNS), i.e., Lang-8, a language-learning website where native speakers can freely correct the sentences written by foreign learners. The proficiency levels of the learners are diverse, but most of the learners, according to our judgment, is of intermediate or lower level.
Our initial collection consists of 1,108,907 sentence pairs from 135,754 essays. As there is lots of noise in raw sentences, we clean up the data by (1) ruling out redundant content, (2) excluding sentences containing foreign words or Chinese phonetic alphabet by checking the Unicode values, (3) dropping overly simple sentences which may not be informative, and (4) utilizing a rule-based classifier to determine whether to include the sentence into the corpus.
The final corpus consists of 717,241 learner sentences from writers of 61 different native languages, in which English and Japanese constitute the majority. As for completeness, 82.78% of the Chinese Second Language sentences on Lang-8 are corrected by native human annotators. One sentence gets corrected approximately 1.53 times on average.
In this paper, we manually annotate the predicate–argument structures for the 600 L2-L1 pairs as the basis for the semantic analysis of learner Chinese. It is from the above corpus that we carefully select 600 pairs of L2-L1 parallel sentences. We would choose the most appropriate one among multiple versions of corrections and recorrect the L1s if necessary. Because word structure is very fundamental for various NLP tasks, our annotation also contains gold word segmentation for both L2 and L1 sentences. Note that there are no natural word boundaries in Chinese text. We first employ a state-of-the-art word segmentation system to produce initial segmentation results and then manually fix segmentation errors.
The dataset includes four typologically different mother tongues, i.e., English (ENG), Japanese (JPN), Russian (RUS) and Arabic (ARA). Sub-corpus of each language consists of 150 sentence pairs. We take the mother languages of the learners into consideration, which have a great impact on grammatical errors and hence automatic semantic analysis. We hope that four selected mother tongues guarantee a good coverage of typologies. The annotated corpus can be used both for linguistic investigation and as test data for NLP systems.
The Annotation Process
Semantic role labeling (SRL) is the process of assigning semantic roles to constituents or their head words in a sentence according to their relationship to the predicates expressed in the sentence. Typical semantic roles can be divided into core arguments and adjuncts. The core arguments include Agent, Patient, Source, Goal, etc, while the adjuncts include Location, Time, Manner, Cause, etc.
To create a standard semantic-role-labeled corpus for learner Chinese, we first annotate a 50-sentence trial set for each native language. Two senior students majoring in Applied Linguistics conducted the annotation. Based on a total of 400 sentences, we adjudicate an initial gold standard, adapting and refining CPB specification as our annotation heuristics. Then the two annotators proceed to annotate a 100-sentence set for each language independently. It is on these larger sets that we report the inter-annotator agreement.
In the final stage, we also produce an adjudicated gold standard for all 600 annotated sentences. This was achieved by comparing the annotations selected by each annotator, discussing the differences, and either selecting one as fully correct or creating a hybrid representing the consensus decision for each choice point. When we felt that the decisions were not already fully guided by the existing annotation guidelines, we worked to articulate an extension to the guidelines that would support the decision.
During the annotation, the annotators apply both position labels and semantic role labels. Position labels include S, B, I and E, which are used to mark whether the word is an argument by itself, or at the beginning or in the middle or at the end of a argument. As for role labels, we mainly apply representations defined by CPB BIBREF1 . The predicate in a sentence was labeled as rel, the core semantic roles were labeled as AN and the adjuncts were labeled as AM.
Inter-annotator Agreement
For inter-annotator agreement, we evaluate the precision (P), recall (R), and F1-score (F) of the semantic labels given by the two annotators. Table TABREF5 shows that our inter-annotator agreement is promising. All L1 texts have F-score above 95, and we take this as a reflection that our annotators are qualified. F-scores on L2 sentences are all above 90, just a little bit lower than those of L1, indicating that L2 sentences can be greatly understood by native speakers. Only modest rules are needed to handle some tricky phenomena:
The labeled argument should be strictly limited to the core roles defined in the frameset of CPB, though the number of arguments in L2 sentences may be more or less than the number defined.
For the roles in L2 that cannot be labeled as arguments under the specification of CPB, if they provide semantic information such as time, location and reason, we would labeled them as adjuncts though they may not be well-formed adjuncts due to the absence of function words.
For unnecessary roles in L2 caused by mistakes of verb subcategorization (see examples in Figure FIGREF30 ), we would leave those roles unlabeled.
Table TABREF10 further reports agreements on each argument (AN) and adjunct (AM) in detail, according to which the high scores are attributed to the high agreement on arguments (AN). The labels of A3 and A4 have no disagreement since they are sparse in CPB and are usually used to label specific semantic roles that have little ambiguity.
We also conducted in-depth analysis on inter-annotator disagreement. For further details, please refer to duan2018argument.
Three SRL Systems
The work on SRL has included a broad spectrum of machine learning and deep learning approaches to the task. Early work showed that syntactic information is crucial for learning long-range dependencies, syntactic constituency structure and global constraints BIBREF10 , BIBREF11 , while initial studies on neural methods achieved state-of-the-art results with little to no syntactic input BIBREF12 , BIBREF13 , BIBREF14 , BIBREF7 . However, the question whether fully labeled syntactic structures provide an improvement for neural SRL is still unsettled pending further investigation.
To evaluate the robustness of state-of-the-art SRL algorithms, we evaluate two representative SRL frameworks. One is a traditional syntax-based SRL system that leverages a syntactic parser and manually crafted features to obtain explicit information to find semantic roles BIBREF15 , BIBREF16 In particular, we employ the system introduced in BIBREF4 . This system first collects all c-commanders of a predicate in question from the output of a parser and puts them in order. It then employs a first order linear-chain global linear model to perform semantic tagging. For constituent parsing, we use two parsers for comparison, one is Berkeley parser BIBREF5 , a well-known implementation of the unlexicalized latent variable PCFG model, the other is a minimal span-based neural parser based on independent scoring of labels and spans BIBREF6 . As proposed in BIBREF6 , the second parser is capable of achieving state-of-the-art single-model performance on the Penn Treebank. On the Chinese TreeBank BIBREF17 , it also outperforms the Berkeley parser for the in-domain test. We call the corresponding SRL systems as the PCFGLA-parser-based and neural-parser-based systems.
The second SRL framework leverages an end-to-end neural model to implicitly capture local and non-local information BIBREF12 , BIBREF7 . In particular, this framework treats SRL as a BIO tagging problem and uses a stacked BiLSTM to find informative embeddings. We apply the system introduced in BIBREF7 for experiments. Because all syntactic information (including POS tags) is excluded, we call this system the neural syntax-agnostic system.
To train the three SRL systems as well as the supporting parsers, we use the CTB and CPB data . In particular, the sentences selected for the CoNLL 2009 shared task are used here for parameter estimation. Note that, since the Berkeley parser is based on PCFGLA grammar, it may fail to get the syntactic outputs for some sentences, while the other parser does not have that problem. In this case, we have made sure that both parsers can parse all 1,200 sentences successfully.
Main Results
The overall performances of the three SRL systems on both L1 and L2 data (150 parallel sentences for each mother tongue) are shown in Table TABREF11 . For all systems, significant decreases on different mother languages can be consistently observed, highlighting the weakness of applying L1-sentence-trained systems to process learner texts. Comparing the two syntax-based systems with the neural syntax-agnostic system, we find that the overall INLINEFORM0 F, which denotes the F-score drop from L1 to L2, is smaller in the syntax-based framework than in the syntax-agnostic system. On English, Japanese and Russian L2 sentences, the syntax-based system has better performances though it sometimes works worse on the corresponding L1 sentences, indicating the syntax-based systems are more robust when handling learner texts.
Furthermore, the neural-parser-based system achieves the best overall performance on the L2 data. Though performing slightly worse than the neural syntax-agnostic one on the L1 data, it has much smaller INLINEFORM0 F, showing that as the syntactic analysis improves, the performances on both the L1 and L2 data grow, while the gap can be maintained. This demonstrates again the importance of syntax in semantic constructions, especially for learner texts.
Table TABREF45 summarizes the SRL results of the baseline PCFGLA-parser-based model as well as its corresponding retrained models. Since both the syntactic parser and the SRL classifier can be retrained and thus enhanced, we report the individual impact as well as the combined one. We can clearly see that when the PCFGLA parser is retrained with the SRL-consistent sentence pairs, it is able to provide better SRL-oriented syntactic analysis for the L2 sentences as well as their corrections, which are essentially L1 sentences. The outputs of the L1 sentences that are generated by the deep SRL system are also useful for improving the linear SRL classifier. A non-obvious fact is that such a retrained model yields better analysis for not only L1 but also L2 sentences. Fortunately, combining both results in further improvement.
Table TABREF46 shows the results of the parallel experiments based on the neural parser. Different from the PCFGLA model, the SRL-consistent trees only yield a slight improvement on the L2 data. On the contrary, retraining the SRL classifier is much more effective. This experiment highlights the different strengths of different frameworks for parsing. Though for standard in-domain test, the neural parser performs better and thus is more and more popular, for some other scenarios, the PCFGLA model is stronger.
Table TABREF47 further shows F-scores for the baseline and the both-retrained model relative to each role type in detail. Given that the F-scores for both models are equal to 0 on A3 and A4, we just omit this part. From the figure we can observe that, all the semantic roles achieve significant improvements in performances.
Analysis
To better understand the overall results, we further look deep into the output by addressing the questions:
What types of error negatively impact both systems over learner texts?
What types of error are more problematic for the neural syntax-agnostic one over the L2 data but can be solved by the syntax-based one to some extent?
We first carry out a suite of empirical investigations by breaking down error types for more detailed evaluation. To compare two systems, we analyze results on ENG-L2 and JPN-L2 given that they reflect significant advantages of the syntax-based systems over the neural syntax-agnostic system. Note that the syntax-based system here refers to the neural-parser-based one. Finally, a concrete study on the instances in the output is conducted, as to validate conclusions in the previous step.
We employ 6 oracle transformations designed by he2017deep to fix various prediction errors sequentially (see details in Table TABREF19 ), and observe the relative improvements after each operation, as to obtain fine-grained error types. Figure FIGREF21 compares two systems in terms of different mistakes on ENG-L2 and JPN-L2 respectively. After fixing the boundaries of spans, the neural syntax-agnostic system catches up with the other, illustrating that though both systems handle boundary detection poorly on the L2 sentences, the neural syntax-agnostic one suffers more from this type of errors.
Excluding boundary errors (after moving, merging, splitting spans and fixing boundaries), we also compare two systems on L2 in terms of detailed label identification, so as to observe which semantic role is more likely to be incorrectly labeled. Figure FIGREF24 shows the confusion matrices. Comparing (a) with (c) and (b) with (d), we can see that the syntax-based and the neural system often overly label A1 when processing learner texts. Besides, the neural syntax-agnostic system predicts the adjunct AM more than necessary on L2 sentences by 54.24% compared with the syntax-based one.
On the basis of typical error types found in the previous stage, specifically, boundary detection and incorrect labels, we further conduct an on-the-spot investigation on the output sentences.
Previous work has proposed that the drop in performance of SRL systems mainly occurs in identifying argument boundaries BIBREF18 . According to our results, this problem will be exacerbated when it comes to L2 sentences, while syntactic structure sometimes helps to address this problem.
Figure FIGREF30 is an example of an output sentence. The Chinese word “也” (also) usually serves as an adjunct but is now used for linking the parallel structure “用 汉语 也 说话 快” (using Chinese also speaking quickly) in this sentence, which is ill-formed to native speakers and negatively affects the boundary detection of A0 for both systems.
On the other hand, the neural system incorrectly takes the whole part before “很 难” (very hard) as A0, regardless of the adjunct “对 我 来说” (for me), while this can be figured out by exploiting syntactic analysis, as illustrated in Figure FIGREF30 . The constituent “对 我 来说” (for me) has been recognized as a prepositional phrase (PP) attached to the VP, thus labeled as AM. This shows that by providing information of some well-formed sub-trees associated with correct semantic roles, the syntactic system can perform better than the neural one on SRL for learner texts.
A second common source of errors is wrong labels, especially for A1. Based on our quantitative analysis, as reported in Table TABREF37 , these phenomena are mainly caused by mistakes of verb subcategorization, where the systems label more arguments than allowed by the predicates. Besides, the deep end-to-end system is also likely to incorrectly attach adjuncts AM to the predicates.
Figure FIGREF30 is another example. The Chinese verb “做饭” (cook-meal) is intransitive, while this sentence uses it as a transitive verb, which is very common in L2. Lacking proper verb subcategorization, both systems fail to recognize that such verbs allow only one argument and label A1 incorrectly.
As for AM, the neural system mistakenly attaches the adjunct to the predicate, which can be avoided with the syntactic information of the sentence shown in Figure FIGREF30 . The constituent “常常” (often) is an adjunct attached to the VP governed by the verb “练习” (practice), and thus should not be labeled as AM with respect to the verb “做饭” (cook-meal). In other words, the hierarchical structure can help in argument identification and assignment by exploiting local information.
Enhancing SRL with L2-L1 Parallel Data
We explore the valuable information about the semantic coherency encoded in the L2-L1 parallel data to improve SRL for learner Chinese. In particular, we introduce an agreement-based model to search for high-quality automatic syntactic and semantic role annotations, and then use these annotations to retrain the two parser-based SRL systems.
The Method
To harvest good automatic syntactic and semantic analyses, we consider the consistency between the automatically produced analyses of a learner sentence and of its corresponding well-formed sentence. Determining the metric for comparing predicate–argument structures, however, presents another challenge, because the words of the L2 sentence and its L1 counterpart do not necessarily match. To solve this problem, we use BerkeleyAligner BIBREF19 , a state-of-the-art automatic word aligner.
The metric for comparing SRL results of two sentences is based on recall of INLINEFORM0 tuples, where INLINEFORM1 is a predicate, INLINEFORM2 is a word that is in the argument or adjunct of INLINEFORM3 and INLINEFORM4 is the corresponding role. Based on a word alignment, we define the shared tuple as a mutual tuple between two SRL results of an L2-L1 sentence pair, meaning that both the predicate and argument words are aligned respectively, and their role relations are the same. We then have two recall values:
L2-recall is (# of shared tuples) / (# of tuples of the result in L2)
L1-recall is (# of shared tuples) / (# of tuples of the result in L1)
In accordance with the above evaluation method, we select the automatic analyses of the highest-scoring sentences and use them to expand the training data. Sentences whose L1-recall and L2-recall are both greater than a threshold INLINEFORM0 are taken as good ones. A parser-based SRL system consists of two essential modules: a syntactic parser and a semantic classifier. To enhance the syntactic parser, the automatically generated syntactic trees of the sentence pairs that exhibit high semantic consistency are directly used to extend the training data. To improve the semantic classifier, besides the consistent semantic analyses, we also use the outputs that the neural syntax-agnostic SRL system generates for the L1 (but not the L2) data.
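A minimal sketch of this selection procedure in Python, assuming each SRL result is given as a set of (predicate index, argument-word index, role) tuples and the word alignment as a set of (L2 index, L1 index) pairs; the function names, data layout and threshold handling are illustrative assumptions rather than details taken from the paper.

def shared_tuple_count(l2_tuples, l1_tuples, alignment):
    # A tuple is shared when predicate and argument words are aligned and the role matches.
    aligned = set(alignment)
    count = 0
    for (p2, a2, r2) in l2_tuples:
        if any(r2 == r1 and (p2, p1) in aligned and (a2, a1) in aligned
               for (p1, a1, r1) in l1_tuples):
            count += 1
    return count

def l2_l1_recall(l2_tuples, l1_tuples, alignment):
    shared = shared_tuple_count(l2_tuples, l1_tuples, alignment)
    l2_recall = shared / len(l2_tuples) if l2_tuples else 0.0
    l1_recall = shared / len(l1_tuples) if l1_tuples else 0.0
    return l2_recall, l1_recall

def select_consistent_pairs(pairs, threshold):
    # Keep sentence pairs whose L2-recall and L1-recall both exceed the threshold.
    return [p for p in pairs
            if min(l2_l1_recall(p["l2_srl"], p["l1_srl"], p["align"])) > threshold]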
Experimental Setup
Our SRL corpus contains 1200 sentences in total that can be used as an evaluation for SRL systems. We separate them into three data sets. The first data set is used as development data, which contains 50 L2-L1 sentence pairs for each language and 200 pairs in total. Hyperparameters are tuned using the development set. The second data set contains all other 400 L2 sentences, which is used as test data for L2. Similarly, all other 400 L1 sentences are used as test data for L1.
The sentence pool for extracting retraining annotations includes all data from native English and Japanese speakers along with its corrections. Table TABREF43 presents the basic statistics. Around 8.5–11.9% of the sentences can be taken as high L1/L2-recall sentences, which reflects that argument structure is vital for language acquisition and difficult for learners to master, as proposed in vazquez2004learning and shin2010contribution. The threshold ( INLINEFORM0 ) for selecting sentences is tuned on the development data. For example, we use an additional 156,520 sentences to enhance the Berkeley parser.
Conclusion
Statistical models of annotating learner texts are making rapid progress. Although there have been some initial studies on defining annotation specification as well as corpora for syntactic analysis, there is almost no work on semantic parsing for interlanguages. This paper discusses this topic, taking Semantic Role Labeling as a case task and learner Chinese as a case language. We reveal three unknown facts that are important towards a deeper analysis of learner languages: (1) the robustness of language comprehension for interlanguage, (2) the weakness of applying L1-sentence-trained systems to process learner texts, and (3) the significance of syntactic parsing and L2-L1 parallel data in building more generalizable SRL models that transfer better to L2. We have successfully provided a better SRL-oriented syntactic parser as well as a semantic classifier for processing the L2 data by exploring L2-L1 parallel data, supported by a significant numeric improvement over a number of state-of-the-art systems. To the best of our knowledge, this is the first work that demonstrates the effectiveness of large-scale L2-L1 parallel data to enhance the NLP system for learner texts.
Acknowledgement
This work was supported by the National Natural Science Foundation of China (61772036, 61331011) and the Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the anonymous reviewers for their helpful comments. We also thank Nianwen Xue for useful comments on the final version. Weiwei Sun is the corresponding author. | Authors
17f5f4a5d943c91d46552fb75940b67a72144697 | 17f5f4a5d943c91d46552fb75940b67a72144697_0 | Q: By how much do they outperform existing state-of-the-art VQA models?
Text: Introduction
We are interested in the problem of visual question answering (VQA), where an algorithm is presented with an image and a question that is formulated in natural language and relates to the contents of the image. The goal of this task is to get the algorithm to correctly answer the question. The VQA task has recently received significant attention from the computer vision community, in particular because obtaining high accuracies would presumably require precise understanding of both natural language as well as visual stimuli. In addition to serving as a milestone towards visual intelligence, there are practical applications such as development of tools for the visually impaired.
The problem of VQA is challenging due to the complex interplay between the language and visual modalities. On one hand, VQA algorithms must be able to parse and interpret the input question, which is provided in natural language BIBREF0 , BIBREF1 , BIBREF2 . This may potentially involve understanding of nouns, verbs and other linguistic elements, as well as their visual significance. On the other hand, the algorithms must analyze the image to identify and recognize the visual elements relevant to the question. Furthermore, some questions may refer directly to the contents of the image, but may require external, common sense knowledge to be answered correctly. Finally, the algorithms should generate a textual output in natural language that correctly answers the input visual question. In spite of the recent research efforts to address these challenges, the problem remains largely unsolved BIBREF3 .
We are particularly interested in giving VQA algorithms the ability to identify the visual elements that are relevant to the question. In the VQA literature, such ability has been implemented by attention mechanisms. Such attention mechanisms generate a heatmap over the input image, which highlights the regions of the image that lead to the answer. These heatmaps are interpreted as groundings of the answer to the most relevant areas of the image. Generally, these mechanisms have either been considered as latent variables for which there is no supervision, or have been treated as output variables that receive direct supervision from human annotations. Unfortunately, both of these approaches have disadvantages. First, unsupervised training of attention tends to lead to models that cannot ground their decision in the image in a human interpretable manner. Second, supervised training of attention is difficult and expensive: human annotators may consider different regions to be relevant for the question at hand, which entails ambiguity and increased annotation cost. Our goal is to leverage the best of both worlds by providing VQA algorithms with interpretable grounding of their answers, without the need of direct and explicit manual annotation of attention.
From a practical point of view, as autonomous machines are increasingly finding real world applications, there is an increasing need to provide them with suitable capabilities to explain their decisions. However, in most applications, including VQA, current state-of-the-art techniques operate as black-box models that are usually trained using a discriminative approach. Similarly to BIBREF4 , in this work we show that, in the context of VQA, such approaches lead to internal representations that do not capture the underlying semantic relations between textual questions and visual information. Consequently, as we show in this work, current state-of-the-art approaches for VQA are not able to support their answers with a suitable interpretable representation.
In this work, we introduce a methodology that provides VQA algorithms with the ability to generate human interpretable attention maps which effectively ground the answer to the relevant image regions. We accomplish this by leveraging region descriptions and object annotations available in the Visual Genome dataset, and using these to automatically construct attention maps that can be used for attention supervision, instead of requiring human annotators to manually provide grounding labels. Our framework achieves competitive state-of-the-art VQA performance, while generating visual groundings that outperform other algorithms that use human annotated attention during training.
The contributions of this paper are: (1) we introduce a mechanism to automatically obtain meaningful attention supervision from both region descriptions and object annotations in the Visual Genome dataset; (2) we show that by using the prediction of region and object label attention maps as auxiliary tasks in a VQA application, it is possible to obtain more interpretable intermediate representations. (3) we experimentally demonstrate state-of-the-art performances in VQA benchmarks as well as visual grounding that closely matches human attention annotations.
Related Work
Since its introduction BIBREF0 , BIBREF1 , BIBREF2 , the VQA problem has attracted an increasing interest BIBREF3 . Its multimodal nature and more precise evaluation protocol than alternative multimodal scenarios, such as image captioning, help to explain this interest. Furthermore, the proliferation of suitable datasets and potential applications, are also key elements behind this increasing activity. Most state-of-the-art methods follow a joint embedding approach, where deep models are used to project the textual question and visual input to a joint feature space that is then used to build the answer. Furthermore, most modern approaches pose VQA as a classification problem, where classes correspond to a set of pre-defined candidate answers. As an example, most entries to the VQA challenge BIBREF2 select as output classes the most common 3000 answers in this dataset, which account for 92% of the instances in the validation set.
The strategy to combine the textual and visual embeddings and the underlying structure of the deep model are key design aspects that differentiate previous works. Antol et al. BIBREF2 propose an element-wise multiplication between image and question embeddings to generate spatial attention map. Fukui et al. BIBREF5 propose multimodal compact bilinear pooling (MCB) to efficiently implement an outer product operator that combines visual and textual representations. Yu et al. BIBREF6 extend this pooling scheme by introducing a multi-modal factorized bilinear pooling approach (MFB) that improves the representational capacity of the bilinear operator. They achieve this by adding an initial step that efficiently expands the textual and visual embeddings to a high-dimensional space. In terms of structural innovations, Noh et al. BIBREF7 embed the textual question as an intermediate dynamic bilinear layer of a ConvNet that processes the visual information. Andreas et al. BIBREF8 propose a model that learns a set of task-specific neural modules that are jointly trained to answer visual questions.
Following the successful introduction of soft attention in neural machine translation applications BIBREF9 , most modern VQA methods also incorporate a similar mechanism. The common approach is to use a one-way attention scheme, where the embedding of the question is used to generate a set of attention coefficients over a set of predefined image regions. These coefficients are then used to weight the embedding of the image regions to obtain a suitable descriptor BIBREF10 , BIBREF11 , BIBREF5 , BIBREF12 , BIBREF6 . More elaborate forms of attention have also been proposed. Xu and Saenko BIBREF13 suggest using word-level embeddings to generate attention. Yang et al. BIBREF14 iterate the application of a soft-attention mechanism over the visual input as a way to progressively refine the location of relevant cues for answering the question. Lu et al. BIBREF15 propose a bidirectional co-attention mechanism that, besides the question-guided visual attention, also incorporates a visually guided attention over the input question.
In all the previous cases, the attention mechanism is applied using an unsupervised scheme, where attention coefficients are considered as latent variables. Recently, there has also been interest in adding a supervised attention scheme to the VQA problem BIBREF4 , BIBREF16 , BIBREF17 . Das et al. BIBREF4 compare the image areas selected by humans and by state-of-the-art VQA techniques to answer the same visual question. To achieve this, they collect the VQA human attention dataset (VQA-HAT), a large dataset of human attention maps built by asking humans to select image areas relevant to answering questions from the VQA dataset BIBREF2 . Interestingly, this study concludes that current machine-generated attention maps exhibit a poor correlation with respect to their human counterparts, suggesting that humans use different visual cues to answer the questions. At a more fundamental level, this suggests that the discriminative nature of most current VQA systems does not effectively constrain the attention modules, leading to the encoding of discriminative cues instead of the underlying semantics that relate a given question-answer pair. Our findings in this work support this hypothesis.
Related to the work in BIBREF4 , Gan et al. BIBREF16 apply a more structured approach to identify the image areas used by humans to answer visual questions. For VQA pairs associated to images in the COCO dataset, they ask humans to select the segmented areas in COCO images that are relevant to answer each question. Afterwards, they use these areas as labels to train a deep learning model that is able to identify attention features. By augmenting a standard VQA technique with these attention features, they are able to achieve a small boost in performance. Closely related to our approach, Qiao et al. BIBREF17 use the attention labels in the VQA-HAT dataset to train an attention proposal network that is able to predict image areas relevant to answer a visual question. This network generates a set of attention proposals for each image in the VQA dataset, which are used as labels to supervise attention in the VQA model. This strategy results in a small boost in performance compared with a non-attentional strategy. In contrast to our approach, these previous works are based on a supervised attention scheme that does not consider an automatic mechanism to obtain the attention labels. Instead, they rely on human annotated groundings as attention supervision. Furthermore, they differ from our work in the method to integrate attention labels to a VQA model.
VQA Model Structure
Figure FIGREF2 shows the main pipeline of our VQA model. We mostly build upon the MCB model in BIBREF5 , which exemplifies current state-of-the-art techniques for this problem. Our main innovation to this model is the addition of an Attention Supervision Module that incorporates visual grounding as an auxiliary task. Next we describe the main modules behind this model.
Question Attention Module: Questions are tokenized and passed through an embedding layer, followed by an LSTM layer that generates the question features INLINEFORM0 , where INLINEFORM1 is the maximum number of words in the tokenized version of the question and INLINEFORM2 is the dimensionality of the hidden state of the LSTM. Additionally, following BIBREF12 , a question attention mechanism is added that generates question attention coefficients INLINEFORM3 , where INLINEFORM4 is the so-called number of “glimpses”. The purpose of INLINEFORM5 is to allow the model to predict multiple attention maps so as to increase its expressiveness. Here, we use INLINEFORM6 . The weighted question features INLINEFORM7 are then computed using a soft attention mechanism BIBREF9 , which is essentially a weighted sum of the INLINEFORM8 word features followed by a concatenation according to INLINEFORM9 .
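A minimal PyTorch sketch of this question-attention step; the layer sizes, the single-layer scoring head and other hyperparameters are assumptions for illustration, not the authors' exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class QuestionAttention(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=1024, glimpses=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim, glimpses)        # one attention score per word and glimpse

    def forward(self, tokens):                               # tokens: (B, T)
        h, _ = self.lstm(self.embed(tokens))                 # word features: (B, T, D)
        alpha = F.softmax(self.score(h), dim=1)              # attention over the T words, per glimpse
        glimpses = torch.einsum("btg,btd->bgd", alpha, h)    # weighted sums of word features
        return glimpses.flatten(1)                           # concatenate the glimpses: (B, G*D)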
Image Attention Module: Images are passed through an embedding layer consisting of a pre-trained ConvNet model, such as Resnet pretrained with the ImageNet dataset BIBREF18 . This generates image features INLINEFORM0 , where INLINEFORM1 , INLINEFORM2 and INLINEFORM3 are depth, height, and width of the extracted feature maps. Fusion Module I is then used to generate a set of image attention coefficients. First, question features INLINEFORM4 are tiled to the same spatial shape as INLINEFORM5 . Afterwards, the fusion module models the joint relationship INLINEFORM6 between questions and images, mapping them to a common space INLINEFORM7 . In the simplest case, one can implement the fusion module using either concatenation or Hadamard product BIBREF19 , but more effective pooling schemes can be applied BIBREF5 , BIBREF20 , BIBREF12 , BIBREF6 . The design choice of the fusion module remains an on-going research topic. In general, it should effectively capture the latent relationship between multi-modal features while remaining easy to optimize. The fusion results are then passed through an attention module that computes the visual attention coefficient INLINEFORM8 , with which we can obtain attention-weighted visual features INLINEFORM9 . Again, INLINEFORM10 is the number of “glimpses”, where we use INLINEFORM11 .
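A sketch of the fusion-plus-attention step using the simple Hadamard-product fusion mentioned above (MCB/MFB pooling could be substituted); the dimensions and the 1x1-convolution parameterization are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageAttention(nn.Module):
    def __init__(self, img_dim=2048, q_dim=2048, joint_dim=1024, glimpses=2):
        super().__init__()
        self.proj_v = nn.Conv2d(img_dim, joint_dim, kernel_size=1)
        self.proj_q = nn.Linear(q_dim, joint_dim)
        self.score = nn.Conv2d(joint_dim, glimpses, kernel_size=1)

    def forward(self, feat_map, q_feat):                     # (B, C, H, W), (B, q_dim)
        v = self.proj_v(feat_map)
        q = self.proj_q(q_feat)[:, :, None, None]            # tile the question over the grid
        joint = torch.relu(v * q)                            # Hadamard-product fusion
        logits = self.score(joint)                           # (B, G, H, W)
        b, g, h, w = logits.shape
        alpha = F.softmax(logits.view(b, g, h * w), dim=-1)  # spatial attention per glimpse
        v_flat = feat_map.view(b, feat_map.size(1), h * w)
        attended = torch.einsum("bgn,bcn->bgc", alpha, v_flat)   # attention-weighted visual features
        return attended.flatten(1), alpha                    # (B, G*C) features and the coefficients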
Classification Module: Using the compact representation of questions INLINEFORM0 and visual information INLINEFORM1 , the classification module applies first the Fusion Module II that provides the feature representation of answers INLINEFORM2 , where INLINEFORM3 is the latent answer space. Afterwards, it computes the logits over a set of predefined candidate answers. Following previous work BIBREF5 , we use as candidate outputs the top 3000 most frequent answers in the VQA dataset. At the end of this process, we obtain the highest scoring answer INLINEFORM4 .
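A compact sketch of the classification step; a Hadamard product stands in for Fusion Module II, and the dimensions are illustrative assumptions.

import torch
import torch.nn as nn

class AnswerClassifier(nn.Module):
    def __init__(self, q_dim=2048, v_dim=4096, joint_dim=1024, num_answers=3000):
        super().__init__()
        self.proj_q = nn.Linear(q_dim, joint_dim)
        self.proj_v = nn.Linear(v_dim, joint_dim)
        self.cls = nn.Linear(joint_dim, num_answers)

    def forward(self, q_feat, v_feat):
        joint = torch.relu(self.proj_q(q_feat) * self.proj_v(v_feat))  # stand-in for Fusion Module II
        logits = self.cls(joint)                  # scores over the 3000 candidate answers
        return logits, logits.argmax(dim=-1)      # logits and the highest-scoring answer index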
Attention Supervision Module: As a main novelty of the VQA model, we add an Image Attention Supervision Module as an auxiliary classification task, where ground-truth visual grounding labels INLINEFORM0 are used to guide the model to focus on meaningful parts of the image to answer each question. To do that, we simply treat the generated attention coefficients INLINEFORM1 as a probability distribution, and then compare it with the ground-truth using KL-divergence. Interestingly, we introduce two attention maps, corresponding to relevant region-level and object-level groundings, as shown in Figure FIGREF3 . Sections SECREF4 and SECREF5 provide details about our proposed method to obtain the attention labels and to train the resulting model, respectively.
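A sketch of the supervision term, assuming the predicted attention coefficients and the mined grounding maps are both normalized distributions over the spatial cells of each glimpse; the epsilon smoothing is an implementation assumption.

import torch

def attention_supervision_loss(pred_alpha, gt_maps, eps=1e-8):
    # KL(ground truth || prediction), averaged over the batch and glimpses.
    # pred_alpha, gt_maps: (B, G, N) distributions over N spatial cells.
    kl = (gt_maps * torch.log((gt_maps + eps) / (pred_alpha + eps))).sum(dim=-1)
    return kl.mean()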
Mining Attention Supervision from Visual Genome
Visual Genome (VG) BIBREF21 includes the largest VQA dataset currently available, which consists of 1.7M QA pairs. Furthermore, for each of its more than 100K images, VG also provides region and object annotations by means of bounding boxes. In terms of visual grounding, these region and object annotations provide complementary information. As an example, as shown in Figure FIGREF3 , for questions related to interaction between objects, region annotations result highly relevant. In contrast, for questions related to properties of specific objects, object annotations result more valuable. Consequently, in this section we present a method to automatically select region and object annotations from VG that can be used as labels to implement visual grounding as an auxiliary task for VQA.
For region annotations, we propose a simple heuristic to mine visual groundings: for each INLINEFORM0 we enumerate all the region descriptions of INLINEFORM1 and pick the description INLINEFORM2 that has the most (at least two) overlapped informative words with INLINEFORM3 and INLINEFORM4 . Informative words are all nouns and verbs, where two informative words are matched if at least one of the following conditions is met: (1) Their raw text as they appear in INLINEFORM5 or INLINEFORM6 are the same; (2) Their lemmatizations (using NLTK BIBREF22 ) are the same; (3) Their synsets in WordNet BIBREF23 are the same; (4) Their aliases (provided from VG) are the same. We refer to the resulting labels as region-level groundings. Figure FIGREF3 (a) illustrates an example of a region-level grounding.
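A sketch of this matching heuristic with NLTK and WordNet; the POS-tag-based extraction of nouns and verbs and the handling of VG aliases as a word-to-set mapping are assumptions about details the text leaves open.

import nltk
from nltk.corpus import wordnet
from nltk.stem import WordNetLemmatizer

# Requires: nltk.download("punkt"), nltk.download("averaged_perceptron_tagger"), nltk.download("wordnet")
_lemmatizer = WordNetLemmatizer()

def informative_words(text):
    # Keep nouns and verbs only.
    return [w.lower() for w, tag in nltk.pos_tag(nltk.word_tokenize(text))
            if tag.startswith(("NN", "VB"))]

def words_match(w1, w2, aliases=None):
    if w1 == w2:                                                  # (1) same raw text
        return True
    if _lemmatizer.lemmatize(w1) == _lemmatizer.lemmatize(w2):    # (2) same lemma
        return True
    if set(wordnet.synsets(w1)) & set(wordnet.synsets(w2)):       # (3) shared WordNet synset
        return True
    return bool(aliases and w2 in aliases.get(w1, ()))            # (4) VG aliases

def best_region(question, answer, regions, aliases=None):
    # Pick the region description with the most (at least two) overlapping informative words.
    qa_words = informative_words(question + " " + answer)
    best, best_overlap = None, 1
    for region in regions:                  # each region: {"phrase": str, "bbox": (x, y, w, h)}
        overlap = sum(any(words_match(r, q, aliases) for q in qa_words)
                      for r in informative_words(region["phrase"]))
        if overlap > best_overlap:
            best, best_overlap = region, overlap
    return best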
In terms of object annotations, for each image in a INLINEFORM0 triplet we select the bounding box of an object as a valid grounding label, if the object name matches one of the informative nouns in INLINEFORM1 or INLINEFORM2 . To score each match, we use the same criteria as region-level groundings. Additionally, if a triplet INLINEFORM3 has a valid region grounding, each corresponding object-level grounding must be inside this region to be accepted as valid. As a further refinement, selected objects grounding are passed through an intersection over union filter to account for the fact that VG usually includes multiple labels for the same object instance. As a final consideration, for questions related to counting, region-level groundings are discarded after the corresponding object-level groundings are extracted. We refer to the resulting labels as object-level groundings. Figure FIGREF3 (b) illustrates an example of an object-level grounding.
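A sketch of the intersection-over-union filter used to merge duplicate object boxes; the 0.5 threshold and the (x, y, w, h) box format are assumptions.

def iou(a, b):
    # Boxes given as (x, y, w, h).
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def filter_duplicate_objects(boxes, threshold=0.5):
    # Greedily keep a box only if it does not overlap an already-kept box too much.
    kept = []
    for box in boxes:
        if all(iou(box, k) < threshold for k in kept):
            kept.append(box)
    return kept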
As a result, combining both region-level and object-level groundings, about 700K out of 1M INLINEFORM0 triplets in VG end up with valid grounding labels. We will make these labels publicly available.
Implementation Details
We build the attention supervision on top of the open-sourced implementations of MCB BIBREF5 and MFB BIBREF12 . Similar to them, we extract the image features from the res5c layer of Resnet-152, resulting in an INLINEFORM0 spatial grid ( INLINEFORM1 , INLINEFORM2 , INLINEFORM3 ). We construct our ground-truth visual grounding labels as INLINEFORM4 glimpse maps per QA pair, where the first map is the object-level grounding and the second map is the region-level grounding, as discussed in Section SECREF4 . Let INLINEFORM5 be the coordinates of the INLINEFORM6 selected object bounding box in the grounding labels; then the mined object-level attention maps INLINEFORM7 are: DISPLAYFORM0
where INLINEFORM0 is the indicator function. Similarly, the region-level attention maps INLINEFORM1 are: DISPLAYFORM0
Afterwards, INLINEFORM0 and INLINEFORM1 are spatially L1-normalized to represent probabilities and concatenated to form INLINEFORM2 .
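Since the two display equations are not reproduced here, the following sketch gives one plausible reading of the construction: mark the cells of the 14x14 grid covered by each selected box, then L1-normalize and concatenate the object-level and region-level maps. The cell-center inclusion test and the uniform fallback for missing region labels are assumptions.

import numpy as np

GRID = 14

def box_to_grid_mask(box, img_w, img_h, grid=GRID):
    # Mark grid cells whose centers fall inside the (x, y, w, h) bounding box.
    x, y, w, h = box
    mask = np.zeros((grid, grid), dtype=np.float32)
    for row in range(grid):
        for col in range(grid):
            cx, cy = (col + 0.5) * img_w / grid, (row + 0.5) * img_h / grid
            if x <= cx <= x + w and y <= cy <= y + h:
                mask[row, col] = 1.0
    return mask

def grounding_label(object_boxes, region_box, img_w, img_h, grid=GRID):
    obj = np.zeros((grid, grid), dtype=np.float32)
    for box in object_boxes:
        obj += box_to_grid_mask(box, img_w, img_h, grid)
    reg = (box_to_grid_mask(region_box, img_w, img_h, grid)
           if region_box is not None else np.ones((grid, grid), dtype=np.float32))
    maps = np.stack([obj, reg]).reshape(2, -1)          # object-level first, region-level second
    return maps / np.maximum(maps.sum(axis=1, keepdims=True), 1e-8)   # L1-normalize each glimpse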
The model is trained using a multi-task loss, DISPLAYFORM0
where INLINEFORM0 denotes cross-entropy and INLINEFORM1 denotes KL-divergence. INLINEFORM2 corresponds to the learned parameters. INLINEFORM3 is a scalar that weights the loss terms. This scalar decays as a function of the iteration number INLINEFORM4 . In particular, we choose to use a cosine-decay function: DISPLAYFORM0
This is motivated by the fact that the visual grounding labels have some level of subjectivity. As an example, Figure FIGREF11 (second row) shows a case where the learned attention seems more accurate than the VQA-HAT ground truth. Hence, as the model learns suitable parameter values, we gradually relax the penalty on the attention maps to give the model more freedom to selectively decide what attention to use. It is important to note that, for training samples in VQA-2.0 or VG that do not have region-level or object-level grounding labels, INLINEFORM0 in Equation EQREF6 , so the loss is reduced to the classification term only. In our experiments, INLINEFORM1 is calibrated for each tested model based on the number of training steps. In particular, we choose INLINEFORM2 for all MCB models and INLINEFORM3 for the others.
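A sketch of the multi-task objective with a cosine-decayed attention weight; since Equations EQREF6 and EQREF7 are not reproduced here, the exact decay formula and the starting value of the weight are assumptions.

import math
import torch
import torch.nn.functional as F

def attention_weight(step, total_steps, start=1.0):
    # Cosine decay from `start` at step 0 to 0 at the end of training.
    return start * 0.5 * (1.0 + math.cos(math.pi * step / total_steps))

def multitask_loss(logits, answer, pred_alpha, gt_maps, step, total_steps, eps=1e-8):
    ce = F.cross_entropy(logits, answer)
    if gt_maps is None:            # samples without mined grounding labels: classification term only
        return ce
    kl = (gt_maps * torch.log((gt_maps + eps) / (pred_alpha + eps))).sum(dim=-1).mean()
    return ce + attention_weight(step, total_steps) * kl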
Datasets
VQA-2.0: The VQA-2.0 dataset BIBREF2 consists of 204721 images, with a total of 1.1M questions and 10 crowd-sourced answers per question. There are more than 20 question types, covering a variety of topics and free-form answers. The dataset is split into training (82K images and 443K questions), validation (40K images and 214K questions), and testing (81K images and 448K questions) sets. The task is to predict a correct answer INLINEFORM0 given a corresponding image-question pair INLINEFORM1 . As a main advantage with respect to version 1.0 BIBREF2 , for every question VQA-2.0 includes complementary images that lead to different answers, reducing language bias by forcing the model to use the visual information.
Visual Genome: The Visual Genome (VG) dataset BIBREF21 contains 108077 images, with an average of 17 QA pairs per image. We follow the processing scheme from BIBREF5 , where non-informative words in the questions and answers such as “a” and “is” are removed. Afterwards, INLINEFORM0 triplets whose answers are a single keyword and which overlap with the VQA-2.0 dataset are included in our training set. This adds 97697 images and about 1 million questions to the training set. Besides the VQA data, VG also provides on average 50 region descriptions and 30 object instances per image. Each region/object is annotated with one sentence/phrase description and bounding box coordinates.
VQA-HAT: VQA-HAT dataset BIBREF4 contains 58475 human visual attention heat (HAT) maps for INLINEFORM0 triplets in VQA-1.0 training set. Annotators were shown a blurred image, a INLINEFORM1 pair and were asked to “scratch” the image until they believe someone else can answer the question by looking at the blurred image and the sharpened area. The authors also collect INLINEFORM2 HAT maps for VQA-1.0 validation sets, where each of the 1374 INLINEFORM3 were labeled by three different annotators, so one can compare the level of agreement among labels. We use VQA-HAT to evaluate visual grounding performance, by comparing the rank-correlation between human attention and model attention, as in BIBREF4 , BIBREF24 .
VQA-X: VQA-X dataset BIBREF24 contains 2000 labeled attention maps in VQA-2.0 validation sets. In contrast to VQA-HAT, VQA-X attention maps are in the form of instance segmentations, where annotators were asked to segment objects and/or regions that most prominently justify the answer. Hence the attentions are more specific and localized. We use VQA-X to evaluate visual grounding performance by comparing the rank-correlation, as in BIBREF4 , BIBREF24 .
Results
We evaluate the performance of our proposed method using two criteria: i) rank-correlation BIBREF25 to evaluate visual grounding and ii) accuracy to evaluate question answering. Intuitively, rank-correlation measures the similarity between human and model attention maps under a rank-based metric. A high rank-correlation means that the model is `looking at' image areas that agree with the visual information used by a human to answer the same question. In terms of accuracy, a predicted answer INLINEFORM0 is evaluated by: DISPLAYFORM0
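A sketch of both metrics, assuming rank-correlation is computed as the Spearman coefficient between flattened attention maps (as in Das et al.) and that the elided accuracy equation is the standard VQA consensus metric; both are plausible readings of the missing formulas rather than verified details.

import numpy as np
from scipy.stats import spearmanr

def rank_correlation(model_map, human_map):
    # Spearman rank correlation between the flattened attention maps.
    return spearmanr(np.ravel(model_map), np.ravel(human_map)).correlation

def vqa_accuracy(predicted, human_answers):
    # Standard VQA consensus metric: an answer is fully correct if at least 3 annotators gave it.
    return min(sum(a == predicted for a in human_answers) / 3.0, 1.0)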
Table TABREF10 reports our main results. Our models are built on top of prior works with the additional Attention Supervision Module as described in Section SECREF3 . Specifically, we denote by Attn-* our adaptation of the respective model obtained by including our Attention Supervision Module. We highlight that the MCB model is the winner of the VQA challenge 2016 and the MFH model is the best single model in the VQA challenge 2017. In Table TABREF10 , we can observe that our proposed model achieves a significant boost in rank-correlation with respect to human attention. Furthermore, our model outperforms alternative state-of-the-art techniques in terms of accuracy in answer prediction. Specifically, the rank-correlation for the MFH model increases by 36.4% when evaluated on the VQA-HAT dataset and by 7.7% when evaluated on VQA-X. This indicates that our proposed methods enable VQA models to provide more meaningful and interpretable results by generating more accurate visual grounding.
Table TABREF10 also reports the result of an experiment where the decaying factor INLINEFORM0 in Equation EQREF7 is fixed to a value of 1. In this case, the model is able to achieve higher rank-correlation, but accuracy drops by 2%. We observe that as training proceeds, attention loss becomes dominant in the final training steps, which affects the accuracy of the classification module.
Figure FIGREF11 shows qualitative results of the resulting visual grounding, including also a comparison with respect to no-attn model.
Conclusions
In this work we have proposed a new method that is able to slightly outperform current state-of-the-art VQA systems, while also providing interpretable representations in the form of an explicitly trainable visual attention mechanism. Specifically, as a main result, our experiments provide evidence that the generated visual groundings achieve high correlation with respect to human-provided attention annotations, outperforming the correlation scores of previous works by a large margin.
As further contributions, we highlight two relevant insights of the proposed approach. On one hand, by using attention labels in an auxiliary task, the proposed approach demonstrates that it is able to constrain the internal representation of the model in such a way that it fosters the encoding of interpretable representations of the underlying relations between the textual question and the input image. On the other hand, the proposed approach demonstrates a method to leverage existing datasets with region descriptions and object labels to effectively supervise the attention mechanism in VQA applications, avoiding costly human labeling.
As future work, we believe that the superior visual grounding provided by the proposed method can play a relevant role to generate natural language explanations to justify the answer to a given visual question. This scenario will help to demonstrate the relevance of our technique as a tool to increase the capabilities of AI based technologies to explain their decisions.
Acknowledgements: This work was partially funded by Oppo, Panasonic and the Millennium Institute for Foundational Research on Data. | the rank-correlation for MFH model increases by 36.4% when is evaluated in VQA-HAT dataset and 7.7% when is evaluated in VQA-X |
83f22814aaed9b5f882168e22a3eac8f5fda3882 | 83f22814aaed9b5f882168e22a3eac8f5fda3882_0 | Q: How do they measure the correlation between manual groundings and model generated ones?
Text: Introduction
We are interested in the problem of visual question answering (VQA), where an algorithm is presented with an image and a question that is formulated in natural language and relates to the contents of the image. The goal of this task is to get the algorithm to correctly answer the question. The VQA task has recently received significant attention from the computer vision community, in particular because obtaining high accuracies would presumably require precise understanding of both natural language as well as visual stimuli. In addition to serving as a milestone towards visual intelligence, there are practical applications such as development of tools for the visually impaired.
The problem of VQA is challenging due to the complex interplay between the language and visual modalities. On one hand, VQA algorithms must be able to parse and interpret the input question, which is provided in natural language BIBREF0 , BIBREF1 , BIBREF2 . This may potentially involve understanding of nouns, verbs and other linguistic elements, as well as their visual significance. On the other hand, the algorithms must analyze the image to identify and recognize the visual elements relevant to the question. Furthermore, some questions may refer directly to the contents of the image, but may require external, common sense knowledge to be answered correctly. Finally, the algorithms should generate a textual output in natural language that correctly answers the input visual question. In spite of the recent research efforts to address these challenges, the problem remains largely unsolved BIBREF3 .
We are particularly interested in giving VQA algorithms the ability to identify the visual elements that are relevant to the question. In the VQA literature, such ability has been implemented by attention mechanisms. Such attention mechanisms generate a heatmap over the input image, which highlights the regions of the image that lead to the answer. These heatmaps are interpreted as groundings of the answer to the most relevant areas of the image. Generally, these mechanisms have either been considered as latent variables for which there is no supervision, or have been treated as output variables that receive direct supervision from human annotations. Unfortunately, both of these approaches have disadvantages. First, unsupervised training of attention tends to lead to models that cannot ground their decision in the image in a human interpretable manner. Second, supervised training of attention is difficult and expensive: human annotators may consider different regions to be relevant for the question at hand, which entails ambiguity and increased annotation cost. Our goal is to leverage the best of both worlds by providing VQA algorithms with interpretable grounding of their answers, without the need of direct and explicit manual annotation of attention.
From a practical point of view, as autonomous machines are increasingly finding real world applications, there is an increasing need to provide them with suitable capabilities to explain their decisions. However, in most applications, including VQA, current state-of-the-art techniques operate as black-box models that are usually trained using a discriminative approach. Similarly to BIBREF4 , in this work we show that, in the context of VQA, such approaches lead to internal representations that do not capture the underlying semantic relations between textual questions and visual information. Consequently, as we show in this work, current state-of-the-art approaches for VQA are not able to support their answers with a suitable interpretable representation.
In this work, we introduce a methodology that provides VQA algorithms with the ability to generate human interpretable attention maps which effectively ground the answer to the relevant image regions. We accomplish this by leveraging region descriptions and object annotations available in the Visual Genome dataset, and using these to automatically construct attention maps that can be used for attention supervision, instead of requiring human annotators to manually provide grounding labels. Our framework achieves competitive state-of-the-art VQA performance, while generating visual groundings that outperform other algorithms that use human annotated attention during training.
The contributions of this paper are: (1) we introduce a mechanism to automatically obtain meaningful attention supervision from both region descriptions and object annotations in the Visual Genome dataset; (2) we show that by using the prediction of region and object label attention maps as auxiliary tasks in a VQA application, it is possible to obtain more interpretable intermediate representations. (3) we experimentally demonstrate state-of-the-art performances in VQA benchmarks as well as visual grounding that closely matches human attention annotations.
Related Work
Since its introduction BIBREF0 , BIBREF1 , BIBREF2 , the VQA problem has attracted an increasing interest BIBREF3 . Its multimodal nature and more precise evaluation protocol than alternative multimodal scenarios, such as image captioning, help to explain this interest. Furthermore, the proliferation of suitable datasets and potential applications, are also key elements behind this increasing activity. Most state-of-the-art methods follow a joint embedding approach, where deep models are used to project the textual question and visual input to a joint feature space that is then used to build the answer. Furthermore, most modern approaches pose VQA as a classification problem, where classes correspond to a set of pre-defined candidate answers. As an example, most entries to the VQA challenge BIBREF2 select as output classes the most common 3000 answers in this dataset, which account for 92% of the instances in the validation set.
The strategy to combine the textual and visual embeddings and the underlying structure of the deep model are key design aspects that differentiate previous works. Antol et al. BIBREF2 propose an element-wise multiplication between image and question embeddings to generate spatial attention map. Fukui et al. BIBREF5 propose multimodal compact bilinear pooling (MCB) to efficiently implement an outer product operator that combines visual and textual representations. Yu et al. BIBREF6 extend this pooling scheme by introducing a multi-modal factorized bilinear pooling approach (MFB) that improves the representational capacity of the bilinear operator. They achieve this by adding an initial step that efficiently expands the textual and visual embeddings to a high-dimensional space. In terms of structural innovations, Noh et al. BIBREF7 embed the textual question as an intermediate dynamic bilinear layer of a ConvNet that processes the visual information. Andreas et al. BIBREF8 propose a model that learns a set of task-specific neural modules that are jointly trained to answer visual questions.
Following the successful introduction of soft attention in neural machine translation applications BIBREF9 , most modern VQA methods also incorporate a similar mechanism. The common approach is to use a one-way attention scheme, where the embedding of the question is used to generate a set of attention coefficients over a set of predefined image regions. These coefficients are then used to weight the embedding of the image regions to obtain a suitable descriptor BIBREF10 , BIBREF11 , BIBREF5 , BIBREF12 , BIBREF6 . More elaborate forms of attention have also been proposed. Xu and Saenko BIBREF13 suggest using word-level embeddings to generate attention. Yang et al. BIBREF14 iterate the application of a soft-attention mechanism over the visual input as a way to progressively refine the location of relevant cues for answering the question. Lu et al. BIBREF15 propose a bidirectional co-attention mechanism that, besides the question-guided visual attention, also incorporates a visually guided attention over the input question.
In all the previous cases, the attention mechanism is applied using an unsupervised scheme, where attention coefficients are considered as latent variables. Recently, there has also been interest in adding a supervised attention scheme to the VQA problem BIBREF4 , BIBREF16 , BIBREF17 . Das et al. BIBREF4 compare the image areas selected by humans and by state-of-the-art VQA techniques to answer the same visual question. To achieve this, they collect the VQA human attention dataset (VQA-HAT), a large dataset of human attention maps built by asking humans to select image areas relevant to answering questions from the VQA dataset BIBREF2 . Interestingly, this study concludes that current machine-generated attention maps exhibit a poor correlation with respect to their human counterparts, suggesting that humans use different visual cues to answer the questions. At a more fundamental level, this suggests that the discriminative nature of most current VQA systems does not effectively constrain the attention modules, leading to the encoding of discriminative cues instead of the underlying semantics that relate a given question-answer pair. Our findings in this work support this hypothesis.
Related to the work in BIBREF4 , Gan et al. BIBREF16 apply a more structured approach to identify the image areas used by humans to answer visual questions. For VQA pairs associated to images in the COCO dataset, they ask humans to select the segmented areas in COCO images that are relevant to answer each question. Afterwards, they use these areas as labels to train a deep learning model that is able to identify attention features. By augmenting a standard VQA technique with these attention features, they are able to achieve a small boost in performance. Closely related to our approach, Qiao et al. BIBREF17 use the attention labels in the VQA-HAT dataset to train an attention proposal network that is able to predict image areas relevant to answer a visual question. This network generates a set of attention proposals for each image in the VQA dataset, which are used as labels to supervise attention in the VQA model. This strategy results in a small boost in performance compared with a non-attentional strategy. In contrast to our approach, these previous works are based on a supervised attention scheme that does not consider an automatic mechanism to obtain the attention labels. Instead, they rely on human annotated groundings as attention supervision. Furthermore, they differ from our work in the method to integrate attention labels to a VQA model.
VQA Model Structure
Figure FIGREF2 shows the main pipeline of our VQA model. We mostly build upon the MCB model in BIBREF5 , which exemplifies current state-of-the-art techniques for this problem. Our main innovation to this model is the addition of an Attention Supervision Module that incorporates visual grounding as an auxiliary task. Next we describe the main modules behind this model.
Question Attention Module: Questions are tokenized and passed through an embedding layer, followed by an LSTM layer that generates the question features INLINEFORM0 , where INLINEFORM1 is the maximum number of words in the tokenized version of the question and INLINEFORM2 is the dimensionality of the hidden state of the LSTM. Additionally, following BIBREF12 , a question attention mechanism is added that generates question attention coefficients INLINEFORM3 , where INLINEFORM4 is the so-called number of “glimpses”. The purpose of INLINEFORM5 is to allow the model to predict multiple attention maps so as to increase its expressiveness. Here, we use INLINEFORM6 . The weighted question features INLINEFORM7 are then computed using a soft attention mechanism BIBREF9 , which is essentially a weighted sum of the INLINEFORM8 word features followed by a concatenation according to INLINEFORM9 .
Image Attention Module: Images are passed through an embedding layer consisting of a pre-trained ConvNet model, such as Resnet pretrained with the ImageNet dataset BIBREF18 . This generates image features INLINEFORM0 , where INLINEFORM1 , INLINEFORM2 and INLINEFORM3 are depth, height, and width of the extracted feature maps. Fusion Module I is then used to generate a set of image attention coefficients. First, question features INLINEFORM4 are tiled to the same spatial shape as INLINEFORM5 . Afterwards, the fusion module models the joint relationship INLINEFORM6 between questions and images, mapping them to a common space INLINEFORM7 . In the simplest case, one can implement the fusion module using either concatenation or Hadamard product BIBREF19 , but more effective pooling schemes can be applied BIBREF5 , BIBREF20 , BIBREF12 , BIBREF6 . The design choice of the fusion module remains an on-going research topic. In general, it should effectively capture the latent relationship between multi-modal features while remaining easy to optimize. The fusion results are then passed through an attention module that computes the visual attention coefficient INLINEFORM8 , with which we can obtain attention-weighted visual features INLINEFORM9 . Again, INLINEFORM10 is the number of “glimpses”, where we use INLINEFORM11 .
Classification Module: Using the compact representation of questions INLINEFORM0 and visual information INLINEFORM1 , the classification module applies first the Fusion Module II that provides the feature representation of answers INLINEFORM2 , where INLINEFORM3 is the latent answer space. Afterwards, it computes the logits over a set of predefined candidate answers. Following previous work BIBREF5 , we use as candidate outputs the top 3000 most frequent answers in the VQA dataset. At the end of this process, we obtain the highest scoring answer INLINEFORM4 .
Attention Supervision Module: As a main novelty of the VQA model, we add an Image Attention Supervision Module as an auxiliary classification task, where ground-truth visual grounding labels INLINEFORM0 are used to guide the model to focus on meaningful parts of the image to answer each question. To do that, we simply treat the generated attention coefficients INLINEFORM1 as a probability distribution, and then compare it with the ground-truth using KL-divergence. Interestingly, we introduce two attention maps, corresponding to relevant region-level and object-level groundings, as shown in Figure FIGREF3 . Sections SECREF4 and SECREF5 provide details about our proposed method to obtain the attention labels and to train the resulting model, respectively.
Mining Attention Supervision from Visual Genome
Visual Genome (VG) BIBREF21 includes the largest VQA dataset currently available, which consists of 1.7M QA pairs. Furthermore, for each of its more than 100K images, VG also provides region and object annotations by means of bounding boxes. In terms of visual grounding, these region and object annotations provide complementary information. As an example, as shown in Figure FIGREF3 , for questions related to interaction between objects, region annotations result highly relevant. In contrast, for questions related to properties of specific objects, object annotations result more valuable. Consequently, in this section we present a method to automatically select region and object annotations from VG that can be used as labels to implement visual grounding as an auxiliary task for VQA.
For region annotations, we propose a simple heuristic to mine visual groundings: for each INLINEFORM0 we enumerate all the region descriptions of INLINEFORM1 and pick the description INLINEFORM2 that has the most (at least two) overlapped informative words with INLINEFORM3 and INLINEFORM4 . Informative words are all nouns and verbs, where two informative words are matched if at least one of the following conditions is met: (1) Their raw text as they appear in INLINEFORM5 or INLINEFORM6 are the same; (2) Their lemmatizations (using NLTK BIBREF22 ) are the same; (3) Their synsets in WordNet BIBREF23 are the same; (4) Their aliases (provided from VG) are the same. We refer to the resulting labels as region-level groundings. Figure FIGREF3 (a) illustrates an example of a region-level grounding.
In terms of object annotations, for each image in a INLINEFORM0 triplet we select the bounding box of an object as a valid grounding label, if the object name matches one of the informative nouns in INLINEFORM1 or INLINEFORM2 . To score each match, we use the same criteria as region-level groundings. Additionally, if a triplet INLINEFORM3 has a valid region grounding, each corresponding object-level grounding must be inside this region to be accepted as valid. As a further refinement, selected objects grounding are passed through an intersection over union filter to account for the fact that VG usually includes multiple labels for the same object instance. As a final consideration, for questions related to counting, region-level groundings are discarded after the corresponding object-level groundings are extracted. We refer to the resulting labels as object-level groundings. Figure FIGREF3 (b) illustrates an example of an object-level grounding.
As a result, combining both region-level and object-level groundings, about 700K out of 1M INLINEFORM0 triplets in VG end up with valid grounding labels. We will make these labels publicly available.
Implementation Details
We build the attention supervision on top of the open-sourced implementations of MCB BIBREF5 and MFB BIBREF12 . Similar to them, we extract the image features from the res5c layer of Resnet-152, resulting in an INLINEFORM0 spatial grid ( INLINEFORM1 , INLINEFORM2 , INLINEFORM3 ). We construct our ground-truth visual grounding labels as INLINEFORM4 glimpse maps per QA pair, where the first map is the object-level grounding and the second map is the region-level grounding, as discussed in Section SECREF4 . Let INLINEFORM5 be the coordinates of the INLINEFORM6 selected object bounding box in the grounding labels; then the mined object-level attention maps INLINEFORM7 are: DISPLAYFORM0
where INLINEFORM0 is the indicator function. Similarly, the region-level attention maps INLINEFORM1 are: DISPLAYFORM0
Afterwards, INLINEFORM0 and INLINEFORM1 are spatially L1-normalized to represent probabilities and concatenated to form INLINEFORM2 .
The model is trained using a multi-task loss, DISPLAYFORM0
where INLINEFORM0 denotes cross-entropy and INLINEFORM1 denotes KL-divergence. INLINEFORM2 corresponds to the learned parameters. INLINEFORM3 is a scalar that weights the loss terms. This scalar decays as a function of the iteration number INLINEFORM4 . In particular, we choose to use a cosine-decay function: DISPLAYFORM0
This is motivated by the fact that the visual grounding labels have some level of subjectivity. As an example, Figure FIGREF11 (second row) shows a case where the learned attention seems more accurate than the VQA-HAT ground truth. Hence, as the model learns suitable parameter values, we gradually relax the penalty on the attention maps to give the model more freedom to selectively decide what attention to use. It is important to note that, for training samples in VQA-2.0 or VG that do not have region-level or object-level grounding labels, INLINEFORM0 in Equation EQREF6 , so the loss is reduced to the classification term only. In our experiments, INLINEFORM1 is calibrated for each tested model based on the number of training steps. In particular, we choose INLINEFORM2 for all MCB models and INLINEFORM3 for the others.
Datasets
VQA-2.0: The VQA-2.0 dataset BIBREF2 consists of 204721 images, with a total of 1.1M questions and 10 crowd-sourced answers per question. There are more than 20 question types, covering a variety of topics and free-form answers. The dataset is split into training (82K images and 443K questions), validation (40K images and 214K questions), and testing (81K images and 448K questions) sets. The task is to predict a correct answer INLINEFORM0 given a corresponding image-question pair INLINEFORM1 . As a main advantage with respect to version 1.0 BIBREF2 , for every question VQA-2.0 includes complementary images that lead to different answers, reducing language bias by forcing the model to use the visual information.
Visual Genome: The Visual Genome (VG) dataset BIBREF21 contains 108077 images, with an average of 17 QA pairs per image. We follow the processing scheme from BIBREF5 , where non-informative words in the questions and answers such as “a” and “is” are removed. Afterwards, INLINEFORM0 triplets whose answers are a single keyword and which overlap with the VQA-2.0 dataset are included in our training set. This adds 97697 images and about 1 million questions to the training set. Besides the VQA data, VG also provides on average 50 region descriptions and 30 object instances per image. Each region/object is annotated with one sentence/phrase description and bounding box coordinates.
VQA-HAT: VQA-HAT dataset BIBREF4 contains 58475 human visual attention heat (HAT) maps for INLINEFORM0 triplets in VQA-1.0 training set. Annotators were shown a blurred image, a INLINEFORM1 pair and were asked to “scratch” the image until they believe someone else can answer the question by looking at the blurred image and the sharpened area. The authors also collect INLINEFORM2 HAT maps for VQA-1.0 validation sets, where each of the 1374 INLINEFORM3 were labeled by three different annotators, so one can compare the level of agreement among labels. We use VQA-HAT to evaluate visual grounding performance, by comparing the rank-correlation between human attention and model attention, as in BIBREF4 , BIBREF24 .
VQA-X: VQA-X dataset BIBREF24 contains 2000 labeled attention maps in VQA-2.0 validation sets. In contrast to VQA-HAT, VQA-X attention maps are in the form of instance segmentations, where annotators were asked to segment objects and/or regions that most prominently justify the answer. Hence the attentions are more specific and localized. We use VQA-X to evaluate visual grounding performance by comparing the rank-correlation, as in BIBREF4 , BIBREF24 .
Results
We evaluate the performance of our proposed method using two criteria: i) rank-correlation BIBREF25 to evaluate visual grounding and ii) accuracy to evaluate question answering. Intuitively, rank-correlation measures the similarity between human and model attention maps under a rank-based metric. A high rank-correlation means that the model is `looking at' image areas that agree with the visual information used by a human to answer the same question. In terms of accuracy, a predicted answer INLINEFORM0 is evaluated by: DISPLAYFORM0
Table TABREF10 reports our main results. Our models are built on top of prior works with the additional Attention Supervision Module as described in Section SECREF3 . Specifically, we denote by Attn-* our adaptation of the respective model obtained by including our Attention Supervision Module. We highlight that the MCB model is the winner of the VQA challenge 2016 and the MFH model is the best single model in the VQA challenge 2017. In Table TABREF10 , we can observe that our proposed model achieves a significant boost in rank-correlation with respect to human attention. Furthermore, our model outperforms alternative state-of-the-art techniques in terms of accuracy in answer prediction. Specifically, the rank-correlation for the MFH model increases by 36.4% when evaluated on the VQA-HAT dataset and by 7.7% when evaluated on VQA-X. This indicates that our proposed methods enable VQA models to provide more meaningful and interpretable results by generating more accurate visual grounding.
Table TABREF10 also reports the result of an experiment where the decaying factor INLINEFORM0 in Equation EQREF7 is fixed to a value of 1. In this case, the model is able to achieve higher rank-correlation, but accuracy drops by 2%. We observe that as training proceeds, attention loss becomes dominant in the final training steps, which affects the accuracy of the classification module.
Figure FIGREF11 shows qualitative results of the resulting visual grounding, including a comparison with the no-attn model.
Conclusions
In this work we have proposed a new method that is able to slightly outperform current state-of-the-art VQA systems, while also providing interpretable representations in the form of an explicitly trainable visual attention mechanism. Specifically, as a main result, our experiments provide evidence that the generated visual groundings achieve high correlation with respect to human-provided attention annotations, outperforming the correlation scores of previous works by a large margin.
As further contributions, we highlight two relevant insights of the proposed approach. On the one hand, by using attention labels as an auxiliary task, the proposed approach demonstrates that it is able to constrain the internal representation of the model in such a way that it fosters the encoding of interpretable representations of the underlying relations between the textual question and the input image. On the other hand, the proposed approach demonstrates a method to leverage existing datasets with region descriptions and object labels to effectively supervise the attention mechanism in VQA applications, avoiding costly human labeling.
As future work, we believe that the superior visual grounding provided by the proposed method can play a relevant role in generating natural language explanations that justify the answer to a given visual question. This scenario will help to demonstrate the relevance of our technique as a tool for increasing the capability of AI-based technologies to explain their decisions.
Acknowledgements: This work was partially funded by Oppo, Panasonic and the Millennium Institute for Foundational Research on Data. | rank-correlation BIBREF25 |
ed11b4ff7ca72dd80a792a6028e16ba20fccff66 | ed11b4ff7ca72dd80a792a6028e16ba20fccff66_0 | Q: How do they obtain region descriptions and object annotations?
Text: Introduction
We are interested in the problem of visual question answering (VQA), where an algorithm is presented with an image and a question that is formulated in natural language and relates to the contents of the image. The goal of this task is to get the algorithm to correctly answer the question. The VQA task has recently received significant attention from the computer vision community, in particular because obtaining high accuracies would presumably require precise understanding of both natural language as well as visual stimuli. In addition to serving as a milestone towards visual intelligence, there are practical applications such as development of tools for the visually impaired.
The problem of VQA is challenging due to the complex interplay between the language and visual modalities. On one hand, VQA algorithms must be able to parse and interpret the input question, which is provided in natural language BIBREF0 , BIBREF1 , BIBREF2 . This may potentially involve understanding of nouns, verbs and other linguistic elements, as well as their visual significance. On the other hand, the algorithms must analyze the image to identify and recognize the visual elements relevant to the question. Furthermore, some questions may refer directly to the contents of the image, but may require external, common sense knowledge to be answered correctly. Finally, the algorithms should generate a textual output in natural language that correctly answers the input visual question. In spite of the recent research efforts to address these challenges, the problem remains largely unsolved BIBREF3 .
We are particularly interested in giving VQA algorithms the ability to identify the visual elements that are relevant to the question. In the VQA literature, such ability has been implemented by attention mechanisms. Such attention mechanisms generate a heatmap over the input image, which highlights the regions of the image that lead to the answer. These heatmaps are interpreted as groundings of the answer to the most relevant areas of the image. Generally, these mechanisms have either been considered as latent variables for which there is no supervision, or have been treated as output variables that receive direct supervision from human annotations. Unfortunately, both of these approaches have disadvantages. First, unsupervised training of attention tends to lead to models that cannot ground their decision in the image in a human interpretable manner. Second, supervised training of attention is difficult and expensive: human annotators may consider different regions to be relevant for the question at hand, which entails ambiguity and increased annotation cost. Our goal is to leverage the best of both worlds by providing VQA algorithms with interpretable grounding of their answers, without the need of direct and explicit manual annotation of attention.
From a practical point of view, as autonomous machines are increasingly finding real world applications, there is an increasing need to provide them with suitable capabilities to explain their decisions. However, in most applications, including VQA, current state-of-the-art techniques operate as black-box models that are usually trained using a discriminative approach. Similarly to BIBREF4 , in this work we show that, in the context of VQA, such approaches lead to internal representations that do not capture the underlying semantic relations between textual questions and visual information. Consequently, as we show in this work, current state-of-the-art approaches for VQA are not able to support their answers with a suitable interpretable representation.
In this work, we introduce a methodology that provides VQA algorithms with the ability to generate human interpretable attention maps which effectively ground the answer to the relevant image regions. We accomplish this by leveraging region descriptions and object annotations available in the Visual Genome dataset, and using these to automatically construct attention maps that can be used for attention supervision, instead of requiring human annotators to manually provide grounding labels. Our framework achieves competitive state-of-the-art VQA performance, while generating visual groundings that outperform other algorithms that use human annotated attention during training.
The contributions of this paper are: (1) we introduce a mechanism to automatically obtain meaningful attention supervision from both region descriptions and object annotations in the Visual Genome dataset; (2) we show that by using the prediction of region and object label attention maps as auxiliary tasks in a VQA application, it is possible to obtain more interpretable intermediate representations; and (3) we experimentally demonstrate state-of-the-art performance on VQA benchmarks as well as visual grounding that closely matches human attention annotations.
Related Work
Since its introduction BIBREF0 , BIBREF1 , BIBREF2 , the VQA problem has attracted an increasing interest BIBREF3 . Its multimodal nature and more precise evaluation protocol than alternative multimodal scenarios, such as image captioning, help to explain this interest. Furthermore, the proliferation of suitable datasets and potential applications, are also key elements behind this increasing activity. Most state-of-the-art methods follow a joint embedding approach, where deep models are used to project the textual question and visual input to a joint feature space that is then used to build the answer. Furthermore, most modern approaches pose VQA as a classification problem, where classes correspond to a set of pre-defined candidate answers. As an example, most entries to the VQA challenge BIBREF2 select as output classes the most common 3000 answers in this dataset, which account for 92% of the instances in the validation set.
The strategy to combine the textual and visual embeddings and the underlying structure of the deep model are key design aspects that differentiate previous works. Antol et al. BIBREF2 propose an element-wise multiplication between image and question embeddings to generate spatial attention map. Fukui et al. BIBREF5 propose multimodal compact bilinear pooling (MCB) to efficiently implement an outer product operator that combines visual and textual representations. Yu et al. BIBREF6 extend this pooling scheme by introducing a multi-modal factorized bilinear pooling approach (MFB) that improves the representational capacity of the bilinear operator. They achieve this by adding an initial step that efficiently expands the textual and visual embeddings to a high-dimensional space. In terms of structural innovations, Noh et al. BIBREF7 embed the textual question as an intermediate dynamic bilinear layer of a ConvNet that processes the visual information. Andreas et al. BIBREF8 propose a model that learns a set of task-specific neural modules that are jointly trained to answer visual questions.
Following the successful introduction of soft attention in neural machine translation applications BIBREF9 , most modern VQA methods also incorporate a similar mechanism. The common approach is to use a one-way attention scheme, where the embedding of the question is used to generate a set of attention coefficients over a set of predefined image regions. These coefficients are then used to weight the embedding of the image regions to obtain a suitable descriptor BIBREF10 , BIBREF11 , BIBREF5 , BIBREF12 , BIBREF6 . More elaborate forms of attention have also been proposed. Xu and Saenko BIBREF13 suggest using word-level embeddings to generate attention. Yang et al. BIBREF14 iterate the application of a soft-attention mechanism over the visual input as a way to progressively refine the location of relevant cues to answer the question. Lu et al. BIBREF15 propose a bidirectional co-attention mechanism that, besides the question-guided visual attention, also incorporates a visually guided attention over the input question.
In all the previous cases, the attention mechanism is applied using an unsupervised scheme, where attention coefficients are considered as latent variables. Recently, there has also been interest in adding a supervised attention scheme to the VQA problem BIBREF4 , BIBREF16 , BIBREF17 . Das et al. BIBREF4 compare the image areas selected by humans and state-of-the-art VQA techniques to answer the same visual question. To achieve this, they collect the VQA human attention dataset (VQA-HAT), a large dataset of human attention maps built by asking humans to select image areas relevant to answer questions from the VQA dataset BIBREF2 . Interestingly, this study concludes that current machine-generated attention maps exhibit a poor correlation with respect to their human counterpart, suggesting that humans use different visual cues to answer the questions. At a more fundamental level, this suggests that the discriminative nature of most current VQA systems does not effectively constrain the attention modules, leading to the encoding of discriminative cues instead of the underlying semantics that relate a given question-answer pair. Our findings in this work support this hypothesis.
Related to the work in BIBREF4 , Gan et al. BIBREF16 apply a more structured approach to identify the image areas used by humans to answer visual questions. For VQA pairs associated to images in the COCO dataset, they ask humans to select the segmented areas in COCO images that are relevant to answer each question. Afterwards, they use these areas as labels to train a deep learning model that is able to identify attention features. By augmenting a standard VQA technique with these attention features, they are able to achieve a small boost in performance. Closely related to our approach, Qiao et al. BIBREF17 use the attention labels in the VQA-HAT dataset to train an attention proposal network that is able to predict image areas relevant to answer a visual question. This network generates a set of attention proposals for each image in the VQA dataset, which are used as labels to supervise attention in the VQA model. This strategy results in a small boost in performance compared with a non-attentional strategy. In contrast to our approach, these previous works are based on a supervised attention scheme that does not consider an automatic mechanism to obtain the attention labels. Instead, they rely on human annotated groundings as attention supervision. Furthermore, they differ from our work in the method to integrate attention labels to a VQA model.
VQA Model Structure
Figure FIGREF2 shows the main pipeline of our VQA model. We mostly build upon the MCB model in BIBREF5 , which exemplifies current state-of-the-art techniques for this problem. Our main innovation to this model is the addition of an Attention Supervision Module that incorporates visual grounding as an auxiliary task. Next we describe the main modules behind this model.
Question Attention Module: Questions are tokenized and passed through an embedding layer, followed by an LSTM layer that generates the question features INLINEFORM0 , where INLINEFORM1 is the maximum number of words in the tokenized version of the question and INLINEFORM2 is the dimensionality of the hidden state of the LSTM. Additionally, following BIBREF12 , a question attention mechanism is added that generates question attention coefficients INLINEFORM3 , where INLINEFORM4 is the so-called number of “glimpses”. The purpose of INLINEFORM5 is to allow the model to predict multiple attention maps so as to increase its expressiveness. Here, we use INLINEFORM6 . The weighted question features INLINEFORM7 are then computed using a soft attention mechanism BIBREF9 , which is essentially a weighted sum of the INLINEFORM8 word features followed by a concatenation according to INLINEFORM9 .
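As an illustration of the mechanism just described, the following PyTorch sketch implements soft question attention with two glimpses followed by concatenation; the layer sizes and the linear scoring function are assumptions of the example, not the settings of the original implementations.

```python
# Illustrative sketch (not the original implementation): soft attention over
# LSTM word features with G glimpses, followed by concatenation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuestionAttention(nn.Module):
    def __init__(self, hidden_dim=1024, glimpses=2):
        super().__init__()
        self.score = nn.Linear(hidden_dim, glimpses)   # one attention score per glimpse

    def forward(self, word_feats, mask):
        # word_feats: (B, T, D) LSTM outputs; mask: (B, T), 1 for real tokens
        logits = self.score(word_feats)                            # (B, T, G)
        logits = logits.masked_fill(mask.unsqueeze(-1) == 0, float("-inf"))
        attn = F.softmax(logits, dim=1)                            # normalize over words
        weighted = torch.einsum("btg,btd->bgd", attn, word_feats)  # weighted sum per glimpse
        return weighted.reshape(weighted.size(0), -1), attn        # (B, G*D), (B, T, G)

# Example usage with random tensors:
q_attn = QuestionAttention()
feats, mask = torch.randn(4, 12, 1024), torch.ones(4, 12, dtype=torch.long)
pooled, alpha = q_attn(feats, mask)   # pooled: (4, 2048)
```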
Image Attention Module: Images are passed through an embedding layer consisting of a pre-trained ConvNet model, such as Resnet pretrained with the ImageNet dataset BIBREF18 . This generates image features INLINEFORM0 , where INLINEFORM1 , INLINEFORM2 and INLINEFORM3 are depth, height, and width of the extracted feature maps. Fusion Module I is then used to generate a set of image attention coefficients. First, question features INLINEFORM4 are tiled to the same spatial shape as INLINEFORM5 . Afterwards, the fusion module models the joint relationship INLINEFORM6 between questions and images, mapping them to a common space INLINEFORM7 . In the simplest case, one can implement the fusion module using either concatenation or the Hadamard product BIBREF19 , but more effective pooling schemes can be applied BIBREF5 , BIBREF20 , BIBREF12 , BIBREF6 . The design choice of the fusion module remains an ongoing research topic. In general, it should effectively capture the latent relationship between the multi-modal features while also being easy to optimize. The fusion results are then passed through an attention module that computes the visual attention coefficient INLINEFORM8 , with which we can obtain attention-weighted visual features INLINEFORM9 . Again, INLINEFORM10 is the number of “glimpses”, where we use INLINEFORM11 .
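The sketch below illustrates this pipeline with the simplest fusion choice, a Hadamard product after linear projections, standing in for MCB/MFB pooling; all dimensions are assumptions of the example.

```python
# Illustrative sketch: tile the question vector over the spatial grid, fuse it
# with the image features (a Hadamard product stands in for MCB/MFB pooling),
# and predict G = 2 spatial attention glimpses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageAttention(nn.Module):
    def __init__(self, img_dim=2048, q_dim=2048, joint_dim=512, glimpses=2):
        super().__init__()
        self.img_proj = nn.Conv2d(img_dim, joint_dim, kernel_size=1)
        self.q_proj = nn.Linear(q_dim, joint_dim)
        self.attn = nn.Conv2d(joint_dim, glimpses, kernel_size=1)

    def forward(self, img_feats, q_feat):
        # img_feats: (B, C, H, W), e.g. res5c features; q_feat: (B, Q)
        B, _, H, W = img_feats.shape
        q = self.q_proj(q_feat).unsqueeze(-1).unsqueeze(-1)            # (B, J, 1, 1), tiled by broadcasting
        joint = torch.relu(self.img_proj(img_feats)) * torch.relu(q)   # element-wise fusion
        logits = self.attn(joint).view(B, -1, H * W)                   # (B, G, H*W)
        alpha = F.softmax(logits, dim=-1)                              # one distribution per glimpse
        v = torch.bmm(alpha, img_feats.view(B, -1, H * W).transpose(1, 2))  # (B, G, C)
        return v.reshape(B, -1), alpha.view(B, -1, H, W)
```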
Classification Module: Using the compact representation of questions INLINEFORM0 and visual information INLINEFORM1 , the classification module applies first the Fusion Module II that provides the feature representation of answers INLINEFORM2 , where INLINEFORM3 is the latent answer space. Afterwards, it computes the logits over a set of predefined candidate answers. Following previous work BIBREF5 , we use as candidate outputs the top 3000 most frequent answers in the VQA dataset. At the end of this process, we obtain the highest scoring answer INLINEFORM4 .
Attention Supervision Module: As a main novelty of the VQA model, we add an Image Attention Supervision Module as an auxiliary classification task, where ground-truth visual grounding labels INLINEFORM0 are used to guide the model to focus on meaningful parts of the image to answer each question. To do that, we simply treat the generated attention coefficients INLINEFORM1 as a probability distribution, and then compare it with the ground-truth using KL-divergence. Interestingly, we introduce two attention maps, corresponding to relevant region-level and object-level groundings, as shown in Figure FIGREF3 . Sections SECREF4 and SECREF5 provide details about our proposed method to obtain the attention labels and to train the resulting model, respectively.
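A minimal sketch of this comparison is shown below; the direction of the KL term and the numerical clamping are choices made for the illustration.

```python
# Illustrative sketch: treat each predicted attention glimpse as a distribution
# over the H*W grid and penalize its divergence from the mined ground-truth map.
import torch

def attention_supervision_loss(pred_attn, gt_attn, eps=1e-8):
    # pred_attn, gt_attn: (B, G, H, W); both normalized to sum to 1 per glimpse
    B, G, _, _ = pred_attn.shape
    p = pred_attn.view(B, G, -1).clamp_min(eps)
    q = gt_attn.view(B, G, -1).clamp_min(eps)
    kl = (q * (q.log() - p.log())).sum(dim=-1)   # KL(ground truth || prediction)
    return kl.mean()
```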
Mining Attention Supervision from Visual Genome
Visual Genome (VG) BIBREF21 includes the largest VQA dataset currently available, which consists of 1.7M QA pairs. Furthermore, for each of its more than 100K images, VG also provides region and object annotations by means of bounding boxes. In terms of visual grounding, these region and object annotations provide complementary information. As an example, as shown in Figure FIGREF3 , for questions related to interaction between objects, region annotations are highly relevant. In contrast, for questions related to properties of specific objects, object annotations are more valuable. Consequently, in this section we present a method to automatically select region and object annotations from VG that can be used as labels to implement visual grounding as an auxiliary task for VQA.
For region annotations, we propose a simple heuristic to mine visual groundings: for each INLINEFORM0 we enumerate all the region descriptions of INLINEFORM1 and pick the description INLINEFORM2 that has the most (at least two) overlapping informative words with INLINEFORM3 and INLINEFORM4 . Informative words are all nouns and verbs, where two informative words are matched if at least one of the following conditions is met: (1) Their raw texts as they appear in INLINEFORM5 or INLINEFORM6 are the same; (2) Their lemmatizations (using NLTK BIBREF22 ) are the same; (3) Their synsets in WordNet BIBREF23 are the same; (4) Their aliases (provided by VG) are the same. We refer to the resulting labels as region-level groundings. Figure FIGREF3 (a) illustrates an example of a region-level grounding.
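The following is a rough sketch of these matching criteria using NLTK; condition (4) would additionally require the VG alias table and is omitted here, and the part-of-speech filter is a simplification of how informative words are selected.

```python
# Rough sketch of the informative-word matching (conditions 1-3); requires the
# NLTK resources punkt, averaged_perceptron_tagger and wordnet to be installed.
import nltk
from nltk.corpus import wordnet as wn
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

def informative_words(text):
    tokens = nltk.word_tokenize(text.lower())
    return [w for w, tag in nltk.pos_tag(tokens) if tag.startswith(("NN", "VB"))]

def words_match(w1, w2):
    if w1 == w2:                                              # same raw text
        return True
    if lemmatizer.lemmatize(w1) == lemmatizer.lemmatize(w2):  # same lemma
        return True
    return bool(set(wn.synsets(w1)) & set(wn.synsets(w2)))    # shared WordNet synset

def overlap_score(question, answer, region_description):
    qa_words = informative_words(question) + informative_words(answer)
    region_words = informative_words(region_description)
    return sum(any(words_match(q, r) for r in region_words) for q in qa_words)

# A region description is a candidate grounding only if overlap_score(...) >= 2;
# the best-scoring description is kept as the region-level grounding.
```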
In terms of object annotations, for each image in a INLINEFORM0 triplet we select the bounding box of an object as a valid grounding label if the object name matches one of the informative nouns in INLINEFORM1 or INLINEFORM2 . To score each match, we use the same criteria as for region-level groundings. Additionally, if a triplet INLINEFORM3 has a valid region grounding, each corresponding object-level grounding must be inside this region to be accepted as valid. As a further refinement, the selected object groundings are passed through an intersection-over-union filter to account for the fact that VG usually includes multiple labels for the same object instance. As a final consideration, for questions related to counting, region-level groundings are discarded after the corresponding object-level groundings are extracted. We refer to the resulting labels as object-level groundings. Figure FIGREF3 (b) illustrates an example of an object-level grounding.
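A sketch of such an intersection-over-union filter is given below; the 0.5 threshold is an assumed value for illustration, not necessarily the value used in the original pipeline.

```python
# Sketch of an IoU-based de-duplication of selected object boxes; boxes are
# (x1, y1, x2, y2) tuples and the 0.5 threshold is illustrative.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def dedup_boxes(boxes, threshold=0.5):
    kept = []
    for box in boxes:
        if all(iou(box, k) < threshold for k in kept):
            kept.append(box)
    return kept
```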
As a result, combining both region-level and object-level groundings, about 700K out of 1M INLINEFORM0 triplets in VG end up with valid grounding labels. We will make these labels publicly available.
Implementation Details
We build the attention supervision on top of the open-sourced implementations of MCB BIBREF5 and MFB BIBREF12 . Similar to them, we extract the image features from the res5c layer of Resnet-152, resulting in an INLINEFORM0 spatial grid ( INLINEFORM1 , INLINEFORM2 , INLINEFORM3 ). We construct our ground-truth visual grounding labels to be INLINEFORM4 glimpse maps per QA pair, where the first map is the object-level grounding and the second map is the region-level grounding, as discussed in Section SECREF4 . Let INLINEFORM5 be the coordinates of the INLINEFORM6 selected object bounding box in the grounding labels; then the mined object-level attention maps INLINEFORM7 are: DISPLAYFORM0
where INLINEFORM0 is the indicator function. Similarly, the region-level attention maps INLINEFORM1 are: DISPLAYFORM0
Afterwards, INLINEFORM0 and INLINEFORM1 are spatially L1-normalized to represent probabilities and concatenated to form INLINEFORM2 .
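A minimal NumPy sketch of this construction is given below; the rounding of box coordinates to grid cells and the example image size are assumptions of the illustration.

```python
# Sketch: rasterize the selected bounding boxes onto the 14x14 feature grid and
# L1-normalize, giving one object-level map and one region-level map per QA pair.
import numpy as np

def boxes_to_attention(boxes, img_w, img_h, grid=14):
    attn = np.zeros((grid, grid), dtype=np.float32)
    for x1, y1, x2, y2 in boxes:
        gx1, gy1 = int(x1 / img_w * grid), int(y1 / img_h * grid)
        gx2, gy2 = int(np.ceil(x2 / img_w * grid)), int(np.ceil(y2 / img_h * grid))
        attn[gy1:gy2, gx1:gx2] += 1.0            # indicator accumulated over boxes
    total = attn.sum()
    return attn / total if total > 0 else attn   # spatial L1 normalization

object_map = boxes_to_attention([(30, 40, 200, 220)], img_w=448, img_h=448)
region_map = boxes_to_attention([(0, 0, 448, 300)], img_w=448, img_h=448)
gt_attention = np.stack([object_map, region_map])  # (2, 14, 14): one map per glimpse
```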
The model is trained using a multi-task loss, DISPLAYFORM0
where INLINEFORM0 denotes cross-entropy and INLINEFORM1 denotes KL-divergence. INLINEFORM2 corresponds to the learned parameters. INLINEFORM3 is a scalar that weights the loss terms. This scalar decays as a function of the iteration number INLINEFORM4 . In particular, we choose to use a cosine-decay function: DISPLAYFORM0
This is motivated by the fact that the visual grounding labels have some level of subjectivity. As an example, Figure FIGREF11 (second row) shows a case where the learned attention seems more accurate than the VQA-HAT ground truth. Hence, as the model learns suitable parameter values, we gradually relax the penalty on the attention maps to give the model more freedom to selectively decide what attention to use. It is important to note that, for training samples in VQA-2.0 or VG that do not have region-level or object-level grounding labels, INLINEFORM0 in Equation EQREF6 , so the loss is reduced to the classification term only. In our experiments, INLINEFORM1 is calibrated for each tested model based on the number of training steps. In particular, we choose INLINEFORM2 for all MCB models and INLINEFORM3 for the others.
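Because the exact constants are calibrated per model, the following is only a schematic of one possible cosine-decay weighting of the attention term, not the precise function used in our experiments.

```python
# Schematic cosine-decay weight for the attention loss: starts near 1 and
# decays towards 0 over a chosen horizon of training steps (an assumption of
# this sketch; the constants in the actual experiments are model-specific).
import math

def attention_loss_weight(step, decay_steps):
    step = min(step, decay_steps)
    return 0.5 * (1.0 + math.cos(math.pi * step / decay_steps))

# Total loss at iteration t (schematic):
#   loss = cross_entropy(answer_logits, answer_labels) \
#          + attention_loss_weight(t, decay_steps) * attention_supervision_loss(pred, gt)
```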
Datasets
VQA-2.0: The VQA-2.0 dataset BIBREF2 consists of 204721 images, with a total of 1.1M questions and 10 crowd-sourced answers per question. There are more than 20 question types, covering a variety of topics and free-form answers. The dataset is split into training (82K images and 443K questions), validation (40K images and 214K questions), and testing (81K images and 448K questions) sets. The task is to predict a correct answer INLINEFORM0 given a corresponding image-question pair INLINEFORM1 . As a main advantage with respect to version 1.0 BIBREF2 , for every question VQA-2.0 includes complementary images that lead to different answers, reducing language bias by forcing the model to use the visual information.
Visual Genome: The Visual Genome (VG) dataset BIBREF21 contains 108077 images, with an average of 17 QA pairs per image. We follow the processing scheme from BIBREF5 , where non-informative words in the questions and answers such as “a” and “is” are removed. Afterwards, INLINEFORM0 triplets whose answers are single keywords and overlap with the VQA-2.0 dataset are included in our training set. This adds 97697 images and about 1 million questions to the training set. Besides the VQA data, VG also provides on average 50 region descriptions and 30 object instances per image. Each region/object is annotated with one sentence/phrase description and bounding box coordinates.
VQA-HAT: VQA-HAT dataset BIBREF4 contains 58475 human visual attention heat (HAT) maps for INLINEFORM0 triplets in VQA-1.0 training set. Annotators were shown a blurred image, a INLINEFORM1 pair and were asked to “scratch” the image until they believe someone else can answer the question by looking at the blurred image and the sharpened area. The authors also collect INLINEFORM2 HAT maps for VQA-1.0 validation sets, where each of the 1374 INLINEFORM3 were labeled by three different annotators, so one can compare the level of agreement among labels. We use VQA-HAT to evaluate visual grounding performance, by comparing the rank-correlation between human attention and model attention, as in BIBREF4 , BIBREF24 .
VQA-X: VQA-X dataset BIBREF24 contains 2000 labeled attention maps in VQA-2.0 validation sets. In contrast to VQA-HAT, VQA-X attention maps are in the form of instance segmentations, where annotators were asked to segment objects and/or regions that most prominently justify the answer. Hence the attentions are more specific and localized. We use VQA-X to evaluate visual grounding performance by comparing the rank-correlation, as in BIBREF4 , BIBREF24 .
Results
We evaluate the performance of our proposed method using two criteria: i) rank-correlation BIBREF25 to evaluate visual grounding and ii) accuracy to evaluate question answering. Intuitively, rank-correlation measures the similarity between human and model attention maps under a rank-based metric. A high rank-correlation means that the model is `looking at' image areas that agree with the visual information used by a human to answer the same question. In terms of question answering, the accuracy of a predicted answer INLINEFORM0 is evaluated by: DISPLAYFORM0
Table TABREF10 reports our main results. Our models are built on top of prior works with the additional Attention Supervision Module as described in Section SECREF3 . Specifically, we denote by Attn-* our adaptation of the respective model by including our Attention Supervision Module. We highlight that the MCB model is the winner of the VQA challenge 2016 and the MFH model is the best single model in the VQA challenge 2017. In Table TABREF10 , we can observe that our proposed model achieves a significant boost in rank-correlation with respect to human attention. Furthermore, our model outperforms alternative state-of-the-art techniques in terms of accuracy in answer prediction. Specifically, the rank-correlation for the MFH model increases by 36.4% when evaluated on the VQA-HAT dataset and by 7.7% when evaluated on VQA-X. This indicates that our proposed methods enable VQA models to provide more meaningful and interpretable results by generating more accurate visual grounding.
Table TABREF10 also reports the result of an experiment where the decaying factor INLINEFORM0 in Equation EQREF7 is fixed to a value of 1. In this case, the model is able to achieve higher rank-correlation, but accuracy drops by 2%. We observe that as training proceeds, attention loss becomes dominant in the final training steps, which affects the accuracy of the classification module.
Figure FIGREF11 shows qualitative results of the resulting visual grounding, including a comparison with the no-attn model.
Conclusions
In this work we have proposed a new method that is able to slightly outperform current state-of-the-art VQA systems, while also providing interpretable representations in the form of an explicitly trainable visual attention mechanism. Specifically, as a main result, our experiments provide evidence that the generated visual groundings achieve high correlation with respect to human-provided attention annotations, outperforming the correlation scores of previous works by a large margin.
As further contributions, we highlight two relevant insights of the proposed approach. On the one hand, by using attention labels as an auxiliary task, the proposed approach demonstrates that it is able to constrain the internal representation of the model in such a way that it fosters the encoding of interpretable representations of the underlying relations between the textual question and the input image. On the other hand, the proposed approach demonstrates a method to leverage existing datasets with region descriptions and object labels to effectively supervise the attention mechanism in VQA applications, avoiding costly human labeling.
As future work, we believe that the superior visual grounding provided by the proposed method can play a relevant role in generating natural language explanations that justify the answer to a given visual question. This scenario will help to demonstrate the relevance of our technique as a tool for increasing the capability of AI-based technologies to explain their decisions.
Acknowledgements: This work was partially funded by Oppo, Panasonic and the Millennium Institute for Foundational Research on Data. | they are available in the Visual Genome dataset |
a48c6d968707bd79469527493a72bfb4ef217007 | a48c6d968707bd79469527493a72bfb4ef217007_0 | Q: Which training dataset allowed for the best generalization to benchmark sets?
Text: Introduction
Natural Language Inference (NLI) has attracted considerable interest in the NLP community and, recently, a large number of neural network-based systems have been proposed to deal with the task. One can attempt a rough categorization of these systems into: a) sentence encoding systems, and b) other neural network systems. Both of them have been very successful, with the state of the art on the SNLI and MultiNLI datasets being 90.4%, which is our baseline with BERT BIBREF0 , and 86.7% BIBREF0 respectively. However, a big question with respect to these systems is their ability to generalize outside the specific datasets they are trained and tested on. Recently, BIBREF1 have shown that state-of-the-art NLI systems break considerably easily when, instead of being tested on the original SNLI test set, they are tested on a test set constructed by taking premises from the training set and creating several hypotheses from them by changing at most one word within the premise. The results show a very significant drop in accuracy for three of the four systems. The system that was most difficult to break and had the smallest loss in accuracy was the system by BIBREF2 , which utilizes external knowledge taken from WordNet BIBREF3 .
In this paper we show that NLI systems that have been very successful on specific NLI benchmarks fail to generalize when they are trained on one NLI dataset and then tested on test sets taken from different NLI benchmarks. The results we get are in line with BIBREF1 , showing that the generalization capability of the individual NLI systems is very limited; what is more, they further show that even the only system that was less prone to breaking in BIBREF1 breaks in the experiments we have conducted.
We train six different state-of-the-art models on three different NLI datasets and test these trained models on an NLI test set taken from another dataset designed for the same NLI task, namely the task of identifying, for each sentence pair in the dataset, whether one sentence entails the other, whether they contradict each other, or whether they are neutral with respect to their inferential relationship.
One would expect that if a model learns to correctly identify inferential relationships in one dataset, then it would also be able to do so in another dataset designed for the same task. Furthermore, two of the datasets, SNLI BIBREF4 and MultiNLI BIBREF5 , have been constructed using the same crowdsourcing approach and annotation instructions BIBREF5 , leading to datasets with the same or at least very similar definition of entailment. It is therefore reasonable to expect that transfer learning between these datasets is possible. As SICK BIBREF6 dataset has been machine-constructed, a bigger difference in performance is expected.
In this paper we show that, contrary to our expectations, most models fail to generalize across the different datasets. However, our experiments also show that BERT BIBREF0 performs much better than the other models in experiments between SNLI and MultiNLI. Nevertheless, even BERT fails when testing on SICK. In addition to the negative results, our experiments further highlight the power of pre-trained language models, like BERT, in NLI.
The negative results of this paper are significant for the NLP research community as well as for NLP practice, as we would like our best models not only to perform well on a specific benchmark dataset, but also to capture the more general phenomenon this dataset is designed for. The main contribution of this paper is that it shows that most of the best performing neural network models for NLI fail in this regard. The second, and equally important, contribution is that our results highlight that the current NLI datasets do not capture the nuances of NLI extensively enough.
Related Work
The ability of NLI systems to generalize and related skepticism has been raised in a number of recent papers. BIBREF1 show that the generalization capabilities of state-of-the-art NLI systems, in cases where some kind of external lexical knowledge is needed, drops dramatically when the SNLI test set is replaced by a test set where the premise and the hypothesis are otherwise identical except for at most one word. The results show a very significant drop in accuracy. BIBREF7 recognize the generalization problem that comes with training on datasets like SNLI, which tend to be homogeneous and with little linguistic variation. In this context, they propose to better train NLI models by making use of adversarial examples.
Multiple papers have reported hidden bias and annotation artifacts in the popular NLI datasets SNLI and MultiNLI allowing classification based on the hypothesis sentences alone BIBREF8 , BIBREF9 , BIBREF10 .
BIBREF11 evaluate the robustness of NLI models using datasets where label preserving swapping operations have been applied, reporting significant performance drops compared to the results with the original dataset. In these experiments, like in the BreakingNLI experiment, the systems that seem to be performing the better, i.e. less prone to breaking, are the ones where some kind of external knowledge is used by the model (KIM by BIBREF2 is one of those systems).
On a theoretical and methodological level, there is discussion on the nature of various NLI datasets, as well as the definition of what counts as NLI and what does not. For example, BIBREF12 , BIBREF13 present an overview of the most standard datasets for NLI and show that the definitions of inference in each of them are actually quite different, capturing only fragments of what seems to be a more general phenomenon.
BIBREF4 show that a simple LSTM model trained on the SNLI data fails when tested on SICK. However, their experiment is limited to this single architecture and dataset pair. BIBREF5 show that different models that perform well on SNLI have lower accuracy on MultiNLI. However in their experiments they did not systematically test transfer learning between the two datasets, but instead used separate systems where the training and test data were drawn from the same corpora.
Experimental Setup
In this section we describe the datasets and model architectures included in the experiments.
Data
We chose three different datasets for the experiments: SNLI, MultiNLI and SICK. All of them have been designed for NLI involving three-way classification with the labels entailment, neutral and contradiction. We did not include any datasets with two-way classification, e.g. SciTail BIBREF14 . As SICK is a relatively small dataset with approximately only 10k sentence pairs, we did not use it as training data in any experiment. We also trained the models with a combined SNLI + MultiNLI training set.
For all the datasets we report the baseline performance where the training and test data are drawn from the same corpus. We then take these trained models and test them on a test set taken from another NLI corpus. For the case where the models are trained with SNLI + MultiNLI we report the baseline using the SNLI test data. All the experimental combinations are listed in Table 1 . Examples from the selected datasets are provided in Table 2 . To be more precise, we vary three things: training dataset, model and testing dataset. We should qualify this though, since the three datasets we look at, can also be grouped by text domain/genre and type of data collection, with MultiNLI and SNLI using the same data collection style, and SNLI and SICK using roughly the same domain/genre. Hopefully, our set up will let us determine which of these factors matters the most.
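Schematically, the protocol can be summarized by the toy harness below; the bag-of-words classifier and the inline sentence pairs are placeholders standing in for the real models and corpora, included only to make the train-on-one-corpus / test-on-another loop explicit.

```python
# Toy illustration of the cross-dataset protocol: fit on pairs from one corpus,
# score on pairs from another. The data and classifier here are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def to_text(pairs):
    return [premise + " [SEP] " + hypothesis for premise, hypothesis, _ in pairs]

train_pairs = [("A man plays a guitar.", "A person makes music.", "entailment"),
               ("A dog runs on grass.", "A cat sleeps indoors.", "contradiction"),
               ("Two kids are outside.", "The kids are brothers.", "neutral")]
test_pairs = [("A woman is cooking.", "Someone prepares food.", "entailment")]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(to_text(train_pairs), [label for _, _, label in train_pairs])
print("cross-dataset accuracy:",
      model.score(to_text(test_pairs), [label for _, _, label in test_pairs]))
```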
We describe the source datasets in more detail below.
The Stanford Natural Language Inference (SNLI) corpus BIBREF4 is a dataset of 570k human-written sentence pairs manually labeled with the labels entailment, contradiction, and neutral. The source for the premise sentences in SNLI were image captions taken from the Flickr30k corpus BIBREF15 .
The Multi-Genre Natural Language Inference (MultiNLI) corpus BIBREF5 consists of 433k human-written sentence pairs labeled with entailment, contradiction and neutral. MultiNLI contains sentence pairs from ten distinct genres of both written and spoken English. Only five genres are included in the training set. The development and test sets have been divided into matched and mismatched, where the former includes only sentences from the same genres as the training data, and the latter includes sentences from the remaining genres not present in the training data.
We used the matched development set (MultiNLI-m) for the experiments. The MultiNLI dataset was annotated using very similar instructions as for the SNLI dataset. Therefore we can assume that the definitions of entailment, contradiction and neutral are the same in these two datasets.
SICK BIBREF6 is a dataset that was originally constructed to test compositional distributional semantics (DS) models. The dataset contains 9,840 examples pertaining to logical inference (negation, conjunction, disjunction, apposition, relative clauses, etc.). The dataset was automatically constructed taking pairs of sentences from a random subset of the 8K ImageFlickr data set BIBREF15 and the SemEval 2012 STS MSRVideo Description dataset BIBREF16 .
Model and Training Details
We perform experiments with six high-performing models covering the sentence encoding models, cross-sentence attention models as well as fine-tuned pre-trained language models.
For sentence encoding models, we chose a simple one-layer bidirectional LSTM with max pooling (BiLSTM-max) with a hidden size of 600D per direction, used e.g. in InferSent BIBREF17 , and HBMP BIBREF18 . For the other models, we have chosen ESIM BIBREF19 , which includes cross-sentence attention, and KIM BIBREF2 , which has cross-sentence attention and utilizes external knowledge. We also selected two models involving a pre-trained language model, namely ESIM + ELMo BIBREF20 and BERT BIBREF0 . KIM is particularly interesting in this context as it performed significantly better than other models in the Breaking NLI experiment conducted by BIBREF1 . The success of pre-trained language models in multiple NLP tasks makes ESIM + ELMo and BERT interesting additions to this experiment. Table 3 lists the different models used in the experiments.
For BiLSTM-max we used the Adam optimizer BIBREF21 , a learning rate of 5e-4 and a batch size of 64. The learning rate was decreased by a factor of 0.2 after each epoch if the model did not improve. Dropout of 0.1 was used between the layers of the multi-layer perceptron classifier, except before the last layer. The BiLSTM-max models were initialized with pre-trained GloVe 840B word embeddings of size 300 dimensions BIBREF22 , which were fine-tuned during training. Our BiLSTM-max model was implemented in PyTorch.
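An illustrative PyTorch sketch of this encoder and optimizer setup is given below; loading of the GloVe weights and the MLP classifier on top of the sentence representations are omitted, and the scheduler shown is one possible realization of the epoch-wise decay rule described above rather than the exact original code.

```python
# Illustrative sketch of the BiLSTM-max sentence encoder with the optimizer
# settings described above (GloVe initialization and the MLP classifier omitted).
import torch
import torch.nn as nn

class BiLSTMMaxEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=600):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # load GloVe 840B vectors here
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        out, _ = self.lstm(self.embed(token_ids))   # (B, T, 2 * hidden_dim)
        return out.max(dim=1).values                # max pooling over time

encoder = BiLSTMMaxEncoder(vocab_size=50000)
optimizer = torch.optim.Adam(encoder.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.2, patience=0)
# After each epoch: scheduler.step(validation_loss)
```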
For HBMP, ESIM, KIM and BERT we used the original implementations with the default settings and hyperparameter values as described in BIBREF18 , BIBREF19 , BIBREF2 and BIBREF0 respectively. For BERT we used the uncased 768-dimensional model (BERT-base). For ESIM + ELMo we used the AllenNLP BIBREF23 PyTorch implementation with the default settings and hyperparameter values.
Experimental Results
Table 4 contains all the experimental results.
Our experiments show that, while all of the six models perform well when the test set is drawn from the same corpus as the training and development set, accuracy is significantly lower when we test these trained models on a test set drawn from a separate NLI corpus, the average difference in accuracy being 24.9 points across all experiments.
Accuracy drops the most when a model is tested on SICK. The difference in this case is between 19.0-29.0 points when trained on MultiNLI, between 31.6-33.7 points when trained on SNLI and between 31.1-33.0 when trained on SNLI + MultiNLI. This was expected, as the method of constructing the sentence pairs was different, and hence there is too much difference in the kind of sentence pairs included in the training and test sets for transfer learning to work. However, the drop was more dramatic than expected.
The most surprising result was that the accuracy of all models drops significantly even when the models were trained on MultiNLI and tested on SNLI (3.6-11.1 points). This is surprising as both of these datasets have been constructed with a similar data collection method using the same definition of entailment, contradiction and neutral. The sentences included in SNLI are also much simpler compared to those in MultiNLI, as they are taken from the Flickr image captions. This might also explain why the difference in accuracy for all of the six models is lowest when the models are trained on MultiNLI and tested on SNLI. It is also very surprising that the model with the biggest difference in accuracy was ESIM + ELMo which includes a pre-trained ELMo language model. BERT performed significantly better than the other models in this experiment having an accuracy of 80.4% and only 3.6 point difference in accuracy.
The poor performance of most of the models with the MultiNLI-SNLI dataset pair is also very surprising given that neural network models do not seem to suffer a lot from introduction of new genres to the test set which were not included in the training set, as can be seen from the small difference in test accuracies for the matched and mismatched test sets (see e.g BIBREF5 ). In a sense SNLI could be seen as a separate genre not included in MultiNLI. This raises the question if the SNLI and MultiNLI have e.g. different kinds of annotation artifacts, which makes transfer learning between these datasets more difficult.
All the models, except BERT, perform almost equally poorly across all the experiments. Both BiLSTM-max and HBMP have an average drop in accuracy of 24.4 points, while the average for KIM is 25.5 and for ESIM + ELMo 25.6. ESIM has the highest average difference of 27.0 points. In contrast to the findings of BIBREF1 , utilizing external knowledge did not improve the model's generalization capability, as KIM performed equally poorly across all dataset combinations.
Also including a pretrained ELMo language model did not improve the results significantly. The overall performance of BERT was significantly better than the other models, having the lowest average difference in accuracy of 22.5 points. Our baselines for SNLI (90.4%) and SNLI + MultiNLI (90.6%) outperform the previous state-of-the-art accuracy for SNLI (90.1%) by BIBREF24 .
To understand better the types of errors made by neural network models in NLI we looked at some example failure-pairs for selected models. Tables 5 and 6 contain some randomly selected failure-pairs for two models: BERT and HBMP, and for three set-ups: SNLI $\rightarrow $ SICK, SNLI $\rightarrow $ MultiNLI and MultiNLI $\rightarrow $ SICK. We chose BERT as the current the state of the art NLI model. HBMP was selected as a high performing model in the sentence encoding model type. Although the listed sentence pairs represent just a small sample of the errors made by these models, they do include some interesting examples. First, it seems that SICK has a more narrow notion of contradiction – corresponding more to logical contradiction – compared to the contradiction in SNLI and MultiNLI, where especially in SNLI the sentences are contradictory if they describe a different state of affairs. This is evident in the sentence pair: A young child is running outside over the fallen leaves and A young child is lying down on a gravel road that is covered with dead leaves, which is predicted by BERT to be contradiction although the gold label is neutral. Another interesting example is the sentence pair: A boat pear with people boarding and disembarking some boats. and people are boarding and disembarking some boats, which is incorrectly predicted by BERT to be contradiction although it has been labeled as entailment. Here the two sentences describe the same event from different points of view: the first one describing a boat pear with some people on it and the second one describing the people directly. Interestingly the added information about the boat pear seems to confuse the model.
Discussion and Conclusion
In this paper we have shown that neural network models for NLI fail to generalize across different NLI benchmarks. We experimented with six state-of-the-art models covering sentence encoding approaches, cross-sentence attention models and pre-trained and fine-tuned language models. For all the systems, the accuracy drops between 3.6-33.7 points (the average drop being 24.9 points), when testing with a test set drawn from a separate corpus from that of the training data, as compared to when the test and training data are splits from the same corpus. Our findings, together with the previous negative findings, indicate that the state-of-the-art models fail to capture the semantics of NLI in a way that will enable them to generalize across different NLI situations.
The results highlight two issues to be taken into consideration: a) models trained on datasets that involve only a fraction of what NLI is will fail when tested on datasets that test a slightly different definition of inference. This is evident when we move from the SNLI to the SICK dataset. b) NLI is to some extent genre/context dependent. Training on SNLI and testing on MultiNLI gives worse results than vice versa. This is particularly evident in the case of BERT. These results highlight that training on multiple genres helps. However, this help is still not enough given that, even in the case of training on MultiNLI (multi genre) and testing on SNLI (single genre and the same definition of inference as MultiNLI), accuracy drops significantly.
We also found that involving a large pre-trained language model helps with transfer learning when the datasets are similar enough, as is the case with SNLI and MultiNLI. Our results further corroborate the power of pre-trained and fine-tuned language models like BERT in NLI. However, not even BERT is able to generalize from SNLI and MultiNLI to SICK, possibly due to the difference between what kind of inference relations are contained in these datasets.
Our findings motivate us to look for novel neural network architectures and approaches that better capture the semantics on natural language inference beyond individual datasets. However, there seems to be a need to start with better constructed datasets, i.e. datasets that will not only capture fractions of what NLI is in reality. Better NLI systems need to be able to be more versatile on the types of inference they can recognize. Otherwise, we would be stuck with systems that can cover only some aspects of NLI. On a theoretical level, and in connection to the previous point, we need a better understanding of the range of phenomena NLI must be able to cover and focus our future endeavours for dataset construction towards this direction. In order to do this a more systematic study is needed on the different kinds of entailment relations NLI datasets need to include. Our future work will include a more systematic and broad-coverage analysis of the types of errors the models make and in what kinds of sentence-pairs they make successful predictions.
Acknowledgments
The first author is supported by the FoTran project, funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113).
The first author also gratefully acknowledges the support of the Academy of Finland through project 314062 from the ICT 2023 call on Computation, Machine Learning and Artificial Intelligence.
The second author is supported by grant 2014-39 from the Swedish Research Council, which funds the Centre for Linguistic Theory and Studies in Probability (CLASP) in the Department of Philosophy, Linguistics, and Theory of Science at the University of Gothenburg. | MultiNLI |
b69897deb5fb80bf2adb44f9cbf6280d747271b3 | b69897deb5fb80bf2adb44f9cbf6280d747271b3_0 | Q: Which model generalized the best?
Text: Introduction
Natural Language Inference (NLI) has attracted considerable interest in the NLP community and, recently, a large number of neural network-based systems have been proposed to deal with the task. One can attempt a rough categorization of these systems into: a) sentence encoding systems, and b) other neural network systems. Both of them have been very successful, with the state of the art on the SNLI and MultiNLI datasets being 90.4%, which is our baseline with BERT BIBREF0 , and 86.7% BIBREF0 respectively. However, a big question with respect to these systems is their ability to generalize outside the specific datasets they are trained and tested on. Recently, BIBREF1 have shown that state-of-the-art NLI systems break considerably easily when, instead of being tested on the original SNLI test set, they are tested on a test set constructed by taking premises from the training set and creating several hypotheses from them by changing at most one word within the premise. The results show a very significant drop in accuracy for three of the four systems. The system that was most difficult to break and had the smallest loss in accuracy was the system by BIBREF2 , which utilizes external knowledge taken from WordNet BIBREF3 .
In this paper we show that NLI systems that have been very successful on specific NLI benchmarks fail to generalize when they are trained on one NLI dataset and then tested on test sets taken from different NLI benchmarks. The results we get are in line with BIBREF1 , showing that the generalization capability of the individual NLI systems is very limited; what is more, they further show that even the only system that was less prone to breaking in BIBREF1 breaks in the experiments we have conducted.
We train six different state-of-the-art models on three different NLI datasets and test these trained models on an NLI test set taken from another dataset designed for the same NLI task, namely the task of identifying, for each sentence pair in the dataset, whether one sentence entails the other, whether they contradict each other, or whether they are neutral with respect to their inferential relationship.
One would expect that if a model learns to correctly identify inferential relationships in one dataset, then it would also be able to do so in another dataset designed for the same task. Furthermore, two of the datasets, SNLI BIBREF4 and MultiNLI BIBREF5 , have been constructed using the same crowdsourcing approach and annotation instructions BIBREF5 , leading to datasets with the same or at least very similar definition of entailment. It is therefore reasonable to expect that transfer learning between these datasets is possible. As SICK BIBREF6 dataset has been machine-constructed, a bigger difference in performance is expected.
In this paper we show that, contrary to our expectations, most models fail to generalize across the different datasets. However, our experiments also show that BERT BIBREF0 performs much better than the other models in experiments between SNLI and MultiNLI. Nevertheless, even BERT fails when testing on SICK. In addition to the negative results, our experiments further highlight the power of pre-trained language models, like BERT, in NLI.
The negative results of this paper are significant for the NLP research community as well as for NLP practice, as we would like our best models not only to perform well on a specific benchmark dataset, but also to capture the more general phenomenon this dataset is designed for. The main contribution of this paper is that it shows that most of the best performing neural network models for NLI fail in this regard. The second, and equally important, contribution is that our results highlight that the current NLI datasets do not capture the nuances of NLI extensively enough.
Related Work
The ability of NLI systems to generalize and related skepticism has been raised in a number of recent papers. BIBREF1 show that the generalization capabilities of state-of-the-art NLI systems, in cases where some kind of external lexical knowledge is needed, drops dramatically when the SNLI test set is replaced by a test set where the premise and the hypothesis are otherwise identical except for at most one word. The results show a very significant drop in accuracy. BIBREF7 recognize the generalization problem that comes with training on datasets like SNLI, which tend to be homogeneous and with little linguistic variation. In this context, they propose to better train NLI models by making use of adversarial examples.
Multiple papers have reported hidden bias and annotation artifacts in the popular NLI datasets SNLI and MultiNLI allowing classification based on the hypothesis sentences alone BIBREF8 , BIBREF9 , BIBREF10 .
BIBREF11 evaluate the robustness of NLI models using datasets where label preserving swapping operations have been applied, reporting significant performance drops compared to the results with the original dataset. In these experiments, like in the BreakingNLI experiment, the systems that seem to be performing the better, i.e. less prone to breaking, are the ones where some kind of external knowledge is used by the model (KIM by BIBREF2 is one of those systems).
On a theoretical and methodological level, there is discussion on the nature of various NLI datasets, as well as the definition of what counts as NLI and what does not. For example, BIBREF12 , BIBREF13 present an overview of the most standard datasets for NLI and show that the definitions of inference in each of them are actually quite different, capturing only fragments of what seems to be a more general phenomenon.
BIBREF4 show that a simple LSTM model trained on the SNLI data fails when tested on SICK. However, their experiment is limited to this single architecture and dataset pair. BIBREF5 show that different models that perform well on SNLI have lower accuracy on MultiNLI. However in their experiments they did not systematically test transfer learning between the two datasets, but instead used separate systems where the training and test data were drawn from the same corpora.
Experimental Setup
In this section we describe the datasets and model architectures included in the experiments.
Data
We chose three different datasets for the experiments: SNLI, MultiNLI and SICK. All of them have been designed for NLI involving three-way classification with the labels entailment, neutral and contradiction. We did not include any datasets with two-way classification, e.g. SciTail BIBREF14 . As SICK is a relatively small dataset with approximately only 10k sentence pairs, we did not use it as training data in any experiment. We also trained the models with a combined SNLI + MultiNLI training set.
For all the datasets we report the baseline performance where the training and test data are drawn from the same corpus. We then take these trained models and test them on a test set taken from another NLI corpus. For the case where the models are trained with SNLI + MultiNLI we report the baseline using the SNLI test data. All the experimental combinations are listed in Table 1 . Examples from the selected datasets are provided in Table 2 . To be more precise, we vary three things: training dataset, model and testing dataset. We should qualify this though, since the three datasets we look at, can also be grouped by text domain/genre and type of data collection, with MultiNLI and SNLI using the same data collection style, and SNLI and SICK using roughly the same domain/genre. Hopefully, our set up will let us determine which of these factors matters the most.
We describe the source datasets in more detail below.
The Stanford Natural Language Inference (SNLI) corpus BIBREF4 is a dataset of 570k human-written sentence pairs manually labeled with the labels entailment, contradiction, and neutral. The source for the premise sentences in SNLI were image captions taken from the Flickr30k corpus BIBREF15 .
The Multi-Genre Natural Language Inference (MultiNLI) corpus BIBREF5 consists of 433k human-written sentence pairs labeled with entailment, contradiction and neutral. MultiNLI contains sentence pairs from ten distinct genres of both written and spoken English. Only five genres are included in the training set. The development and test sets have been divided into matched and mismatched, where the former includes only sentences from the same genres as the training data, and the latter includes sentences from the remaining genres not present in the training data.
We used the matched development set (MultiNLI-m) for the experiments. The MultiNLI dataset was annotated using very similar instructions as for the SNLI dataset. Therefore we can assume that the definitions of entailment, contradiction and neutral are the same in these two datasets.
SICK BIBREF6 is a dataset that was originally constructed to test compositional distributional semantics (DS) models. The dataset contains 9,840 examples pertaining to logical inference (negation, conjunction, disjunction, apposition, relative clauses, etc.). The dataset was automatically constructed taking pairs of sentences from a random subset of the 8K ImageFlickr data set BIBREF15 and the SemEval 2012 STS MSRVideo Description dataset BIBREF16 .
Model and Training Details
We perform experiments with six high-performing models covering the sentence encoding models, cross-sentence attention models as well as fine-tuned pre-trained language models.
For sentence encoding models, we chose a simple one-layer bidirectional LSTM with max pooling (BiLSTM-max) with a hidden size of 600D per direction, used e.g. in InferSent BIBREF17 , and HBMP BIBREF18 . For the other models, we have chosen ESIM BIBREF19 , which includes cross-sentence attention, and KIM BIBREF2 , which has cross-sentence attention and utilizes external knowledge. We also selected two models involving a pre-trained language model, namely ESIM + ELMo BIBREF20 and BERT BIBREF0 . KIM is particularly interesting in this context as it performed significantly better than other models in the Breaking NLI experiment conducted by BIBREF1 . The success of pre-trained language models in multiple NLP tasks makes ESIM + ELMo and BERT interesting additions to this experiment. Table 3 lists the different models used in the experiments.
For BiLSTM-max we used the Adam optimizer BIBREF21 , a learning rate of 5e-4 and a batch size of 64. The learning rate was decreased by a factor of 0.2 after each epoch if the model did not improve. Dropout of 0.1 was used between the layers of the multi-layer perceptron classifier, except before the last layer. The BiLSTM-max models were initialized with pre-trained GloVe 840B word embeddings of size 300 dimensions BIBREF22 , which were fine-tuned during training. Our BiLSTM-max model was implemented in PyTorch.
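To make this setup concrete, the following is a minimal PyTorch sketch of a BiLSTM-max sentence encoder and a three-way NLI classifier using the hyperparameters stated above (600D hidden size per direction, Adam with learning rate 5e-4, dropout 0.1 except before the last layer). The InferSent-style feature combination, the classifier width and the scheduler configuration are illustrative assumptions, not the exact implementation used in the paper; inputs are assumed to be pre-embedded GloVe token sequences.

```python
import torch
import torch.nn as nn

class BiLSTMMaxEncoder(nn.Module):
    """One-layer bidirectional LSTM with max pooling over time."""
    def __init__(self, embed_dim=300, hidden_dim=600):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=1,
                            bidirectional=True, batch_first=True)

    def forward(self, embedded):              # (batch, seq_len, embed_dim)
        outputs, _ = self.lstm(embedded)      # (batch, seq_len, 2 * hidden_dim)
        return outputs.max(dim=1).values      # max pooling over the time axis

class NLIClassifier(nn.Module):
    """Encode premise and hypothesis, combine, and classify into 3 labels."""
    def __init__(self, encoder, sent_dim=1200, num_classes=3, dropout=0.1):
        super().__init__()
        self.encoder = encoder
        self.mlp = nn.Sequential(
            nn.Dropout(dropout), nn.Linear(4 * sent_dim, 600), nn.ReLU(),
            nn.Dropout(dropout), nn.Linear(600, 600), nn.ReLU(),
            nn.Linear(600, num_classes))      # no dropout before the last layer

    def forward(self, premise, hypothesis):   # pre-embedded token sequences
        u, v = self.encoder(premise), self.encoder(hypothesis)
        features = torch.cat([u, v, torch.abs(u - v), u * v], dim=1)
        return self.mlp(features)

model = NLIClassifier(BiLSTMMaxEncoder())
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
# Reduce the learning rate by a factor of 0.2 when validation accuracy stalls.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.2, patience=0)
```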
For HBMP, ESIM, KIM and BERT we used the original implementations with the default settings and hyperparameter values as described in BIBREF18 , BIBREF19 , BIBREF2 and BIBREF0 respectively. For BERT we used the uncased 768-dimensional model (BERT-base). For ESIM + ELMo we used the AllenNLP BIBREF23 PyTorch implementation with the default settings and hyperparameter values.
Experimental Results
Table 4 contains all the experimental results.
Our experiments show that, while all of the six models perform well when the test set is drawn from the same corpus as the training and development set, accuracy is significantly lower when we test these trained models on a test set drawn from a separate NLI corpus, the average difference in accuracy being 24.9 points across all experiments.
Accuracy drops the most when a model is tested on SICK. The difference in this case is between 19.0-29.0 points when trained on MultiNLI, between 31.6-33.7 points when trained on SNLI and between 31.1-33.0 when trained on SNLI + MultiNLI. This was expected, as the method of constructing the sentence pairs was different, and hence there is too much difference in the kind of sentence pairs included in the training and test sets for transfer learning to work. However, the drop was more dramatic than expected.
The most surprising result was that the accuracy of all models drops significantly even when the models were trained on MultiNLI and tested on SNLI (3.6-11.1 points). This is surprising as both of these datasets have been constructed with a similar data collection method using the same definition of entailment, contradiction and neutral. The sentences included in SNLI are also much simpler compared to those in MultiNLI, as they are taken from the Flickr image captions. This might also explain why the difference in accuracy for all of the six models is lowest when the models are trained on MultiNLI and tested on SNLI. It is also very surprising that the model with the biggest difference in accuracy was ESIM + ELMo which includes a pre-trained ELMo language model. BERT performed significantly better than the other models in this experiment having an accuracy of 80.4% and only 3.6 point difference in accuracy.
The poor performance of most of the models with the MultiNLI-SNLI dataset pair is also very surprising given that neural network models do not seem to suffer a lot from introduction of new genres to the test set which were not included in the training set, as can be seen from the small difference in test accuracies for the matched and mismatched test sets (see e.g BIBREF5 ). In a sense SNLI could be seen as a separate genre not included in MultiNLI. This raises the question if the SNLI and MultiNLI have e.g. different kinds of annotation artifacts, which makes transfer learning between these datasets more difficult.
All the models, except BERT, perform almost equally poorly across all the experiments. Both BiLSTM-max and HBMP have an average drop in accuracy of 24.4 points, while the average for KIM is 25.5 and for ESIM + ELMo 25.6. ESIM has the highest average difference of 27.0 points. In contrast to the findings of BIBREF1 , utilizing external knowledge did not improve the model's generalization capability, as KIM performed equally poorly across all dataset combinations.
Also including a pretrained ELMo language model did not improve the results significantly. The overall performance of BERT was significantly better than the other models, having the lowest average difference in accuracy of 22.5 points. Our baselines for SNLI (90.4%) and SNLI + MultiNLI (90.6%) outperform the previous state-of-the-art accuracy for SNLI (90.1%) by BIBREF24 .
To understand better the types of errors made by neural network models in NLI we looked at some example failure-pairs for selected models. Tables 5 and 6 contain some randomly selected failure-pairs for two models: BERT and HBMP, and for three set-ups: SNLI $\rightarrow $ SICK, SNLI $\rightarrow $ MultiNLI and MultiNLI $\rightarrow $ SICK. We chose BERT as the current state-of-the-art NLI model. HBMP was selected as a high performing model in the sentence encoding model type. Although the listed sentence pairs represent just a small sample of the errors made by these models, they do include some interesting examples. First, it seems that SICK has a more narrow notion of contradiction – corresponding more to logical contradiction – compared to the contradiction in SNLI and MultiNLI, where especially in SNLI the sentences are contradictory if they describe a different state of affairs. This is evident in the sentence pair: A young child is running outside over the fallen leaves and A young child is lying down on a gravel road that is covered with dead leaves, which is predicted by BERT to be contradiction although the gold label is neutral. Another interesting example is the sentence pair: A boat pear with people boarding and disembarking some boats. and people are boarding and disembarking some boats, which is incorrectly predicted by BERT to be contradiction although it has been labeled as entailment. Here the two sentences describe the same event from different points of view: the first one describing a boat pear with some people on it and the second one describing the people directly. Interestingly, the added information about the boat pear seems to confuse the model.
Discussion and Conclusion
In this paper we have shown that neural network models for NLI fail to generalize across different NLI benchmarks. We experimented with six state-of-the-art models covering sentence encoding approaches, cross-sentence attention models and pre-trained and fine-tuned language models. For all the systems, the accuracy drops between 3.6-33.7 points (the average drop being 24.9 points), when testing with a test set drawn from a separate corpus from that of the training data, as compared to when the test and training data are splits from the same corpus. Our findings, together with the previous negative findings, indicate that the state-of-the-art models fail to capture the semantics of NLI in a way that will enable them to generalize across different NLI situations.
The results highlight two issues to be taken into consideration: a) models trained on datasets that involve only a fraction of what NLI is will fail when tested on datasets that target a slightly different definition of inference. This is evident when we move from the SNLI to the SICK dataset. b) NLI is to some extent genre/context dependent. Training on SNLI and testing on MultiNLI gives worse results than vice versa. This is particularly evident in the case of BERT. These results highlight that training on multiple genres helps. However, this help is still not enough given that, even in the case of training on MultiNLI (multi genre) and testing on SNLI (single genre and same definition of inference as MultiNLI), accuracy drops significantly.
We also found that involving a large pre-trained language model helps with transfer learning when the datasets are similar enough, as is the case with SNLI and MultiNLI. Our results further corroborate the power of pre-trained and fine-tuned language models like BERT in NLI. However, not even BERT is able to generalize from SNLI and MultiNLI to SICK, possibly due to the difference between what kind of inference relations are contained in these datasets.
Our findings motivate us to look for novel neural network architectures and approaches that better capture the semantics of natural language inference beyond individual datasets. However, there seems to be a need to start with better constructed datasets, i.e. datasets that capture more than just fractions of what NLI is in reality. Better NLI systems need to be able to be more versatile on the types of inference they can recognize. Otherwise, we would be stuck with systems that can cover only some aspects of NLI. On a theoretical level, and in connection to the previous point, we need a better understanding of the range of phenomena NLI must be able to cover and focus our future endeavours for dataset construction in this direction. In order to do this a more systematic study is needed on the different kinds of entailment relations NLI datasets need to include. Our future work will include a more systematic and broad-coverage analysis of the types of errors the models make and in what kinds of sentence-pairs they make successful predictions.
Acknowledgments
The first author is supported by the FoTran project, funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113).
The first author also gratefully acknowledges the support of the Academy of Finland through project 314062 from the ICT 2023 call on Computation, Machine Learning and Artificial Intelligence.
The second author is supported by grant 2014-39 from the Swedish Research Council, which funds the Centre for Linguistic Theory and Studies in Probability (CLASP) in the Department of Philosophy, Linguistics, and Theory of Science at the University of Gothenburg. | BERT |
ad1f230f10235413d1fe501e414358245b415476 | ad1f230f10235413d1fe501e414358245b415476_0 | Q: Which models were compared?
Text: Introduction
Natural Language Inference (NLI) has attracted considerable interest in the NLP community and, recently, a large number of neural network-based systems have been proposed to deal with the task. One can attempt a rough categorization of these systems into: a) sentence encoding systems, and b) other neural network systems. Both of them have been very successful, with the state of the art on the SNLI and MultiNLI datasets being 90.4%, which is our baseline with BERT BIBREF0 , and 86.7% BIBREF0 respectively. However, a big question with respect to these systems is their ability to generalize outside the specific datasets they are trained and tested on. Recently, BIBREF1 have shown that state-of-the-art NLI systems break considerably easily when, instead of tested on the original SNLI test set, they are tested on a test set which is constructed by taking premises from the training set and creating several hypotheses from them by changing at most one word within the premise. The results show a very significant drop in accuracy for three of the four systems. The system that was more difficult to break and had the least loss in accuracy was the system by BIBREF2 which utilizes external knowledge taken from WordNet BIBREF3 .
In this paper we show that NLI systems that have been very successful in specific NLI benchmarks, fail to generalize when trained on a specific NLI dataset and then these trained models are tested across test sets taken from different NLI benchmarks. The results we get are in line with BIBREF1 , showing that the generalization capability of the individual NLI systems is very limited, but, what is more, they further show the only system that was less prone to breaking in BIBREF1 , breaks too in the experiments we have conducted.
We train six different state-of-the-art models on three different NLI datasets and test these trained models on an NLI test set taken from another dataset designed for the same NLI task, namely for the task to identify for sentence pairs in the dataset if one sentence entails the other one, if they are in contradiction with each other or if they are neutral with respect to inferential relationship.
One would expect that if a model learns to correctly identify inferential relationships in one dataset, then it would also be able to do so in another dataset designed for the same task. Furthermore, two of the datasets, SNLI BIBREF4 and MultiNLI BIBREF5 , have been constructed using the same crowdsourcing approach and annotation instructions BIBREF5 , leading to datasets with the same or at least very similar definition of entailment. It is therefore reasonable to expect that transfer learning between these datasets is possible. As SICK BIBREF6 dataset has been machine-constructed, a bigger difference in performance is expected.
In this paper we show that, contrary to our expectations, most models fail to generalize across the different datasets. However, our experiments also show that BERT BIBREF0 performs much better than the other models in experiments between SNLI and MultiNLI. Nevertheless, even BERT fails when testing on SICK. In addition to the negative results, our experiments further highlight the power of pre-trained language models, like BERT, in NLI.
The negative results of this paper are significant for the NLP research community as well as to NLP practice as we would like our best models to not only to be able to perform well in a specific benchmark dataset, but rather capture the more general phenomenon this dataset is designed for. The main contribution of this paper is that it shows that most of the best performing neural network models for NLI fail in this regard. The second, and equally important, contribution is that our results highlight that the current NLI datasets do not capture the nuances of NLI extensively enough.
Related Work
The ability of NLI systems to generalize and related skepticism has been raised in a number of recent papers. BIBREF1 show that the generalization capabilities of state-of-the-art NLI systems, in cases where some kind of external lexical knowledge is needed, drops dramatically when the SNLI test set is replaced by a test set where the premise and the hypothesis are otherwise identical except for at most one word. The results show a very significant drop in accuracy. BIBREF7 recognize the generalization problem that comes with training on datasets like SNLI, which tend to be homogeneous and with little linguistic variation. In this context, they propose to better train NLI models by making use of adversarial examples.
Multiple papers have reported hidden bias and annotation artifacts in the popular NLI datasets SNLI and MultiNLI allowing classification based on the hypothesis sentences alone BIBREF8 , BIBREF9 , BIBREF10 .
BIBREF11 evaluate the robustness of NLI models using datasets where label preserving swapping operations have been applied, reporting significant performance drops compared to the results with the original dataset. In these experiments, like in the BreakingNLI experiment, the systems that seem to be performing better, i.e. less prone to breaking, are the ones where some kind of external knowledge is used by the model (KIM by BIBREF2 is one of those systems).
On a theoretical and methodological level, there is discussion on the nature of various NLI datasets, as well as the definition of what counts as NLI and what does not. For example, BIBREF12 , BIBREF13 present an overview of the most standard datasets for NLI and show that the definitions of inference in each of them are actually quite different, capturing only fragments of what seems to be a more general phenomenon.
BIBREF4 show that a simple LSTM model trained on the SNLI data fails when tested on SICK. However, their experiment is limited to this single architecture and dataset pair. BIBREF5 show that different models that perform well on SNLI have lower accuracy on MultiNLI. However in their experiments they did not systematically test transfer learning between the two datasets, but instead used separate systems where the training and test data were drawn from the same corpora.
Experimental Setup
In this section we describe the datasets and model architectures included in the experiments.
Data
We chose three different datasets for the experiments: SNLI, MultiNLI and SICK. All of them have been designed for NLI involving three-way classification with the labels entailment, neutral and contradiction. We did not include any datasets with two-way classification, e.g. SciTail BIBREF14 . As SICK is a relatively small dataset with approximately only 10k sentence pairs, we did not use it as training data in any experiment. We also trained the models with a combined SNLI + MultiNLI training set.
For all the datasets we report the baseline performance where the training and test data are drawn from the same corpus. We then take these trained models and test them on a test set taken from another NLI corpus. For the case where the models are trained with SNLI + MultiNLI we report the baseline using the SNLI test data. All the experimental combinations are listed in Table 1 . Examples from the selected datasets are provided in Table 2 . To be more precise, we vary three things: training dataset, model and testing dataset. We should qualify this though, since the three datasets we look at, can also be grouped by text domain/genre and type of data collection, with MultiNLI and SNLI using the same data collection style, and SNLI and SICK using roughly the same domain/genre. Hopefully, our set up will let us determine which of these factors matters the most.
We describe the source datasets in more detail below.
The Stanford Natural Language Inference (SNLI) corpus BIBREF4 is a dataset of 570k human-written sentence pairs manually labeled with the labels entailment, contradiction, and neutral. The source for the premise sentences in SNLI were image captions taken from the Flickr30k corpus BIBREF15 .
The Multi-Genre Natural Language Inference (MultiNLI) corpus BIBREF5 consists of 433k human-written sentence pairs labeled with entailment, contradiction and neutral. MultiNLI contains sentence pairs from ten distinct genres of both written and spoken English. Only five genres are included in the training set. The development and test sets have been divided into matched and mismatched, where the former includes only sentences from the same genres as the training data, and the latter includes sentences from the remaining genres not present in the training data.
We used the matched development set (MultiNLI-m) for the experiments. The MultiNLI dataset was annotated using very similar instructions as for the SNLI dataset. Therefore we can assume that the definitions of entailment, contradiction and neutral are the same in these two datasets.
SICK BIBREF6 is a dataset that was originally constructed to test compositional distributional semantics (DS) models. The dataset contains 9,840 examples pertaining to logical inference (negation, conjunction, disjunction, apposition, relative clauses, etc.). The dataset was automatically constructed taking pairs of sentences from a random subset of the 8K ImageFlickr data set BIBREF15 and the SemEval 2012 STS MSRVideo Description dataset BIBREF16 .
Model and Training Details
We perform experiments with six high-performing models covering the sentence encoding models, cross-sentence attention models as well as fine-tuned pre-trained language models.
For sentence encoding models, we chose a simple one-layer bidirectional LSTM with max pooling (BiLSTM-max) with a hidden size of 600D per direction, used e.g. in InferSent BIBREF17 , and HBMP BIBREF18 . For the other models, we have chosen ESIM BIBREF19 , which includes cross-sentence attention, and KIM BIBREF2 , which has cross-sentence attention and utilizes external knowledge. We also selected two models involving a pre-trained language model, namely ESIM + ELMo BIBREF20 and BERT BIBREF0 . KIM is particularly interesting in this context as it performed significantly better than other models in the Breaking NLI experiment conducted by BIBREF1 . The success of pre-trained language models in multiple NLP tasks makes ESIM + ELMo and BERT interesting additions to this experiment. Table 3 lists the different models used in the experiments.
For BiLSTM-max we used the Adam optimizer BIBREF21 , a learning rate of 5e-4 and a batch size of 64. The learning rate was decreased by a factor of 0.2 after each epoch if the model did not improve. Dropout of 0.1 was used between the layers of the multi-layer perceptron classifier, except before the last layer. The BiLSTM-max models were initialized with pre-trained GloVe 840B word embeddings of size 300 dimensions BIBREF22 , which were fine-tuned during training. Our BiLSTM-max model was implemented in PyTorch.
For HBMP, ESIM, KIM and BERT we used the original implementations with the default settings and hyperparameter values as described in BIBREF18 , BIBREF19 , BIBREF2 and BIBREF0 respectively. For BERT we used the uncased 768-dimensional model (BERT-base). For ESIM + ELMo we used the AllenNLP BIBREF23 PyTorch implementation with the default settings and hyperparameter values.
Experimental Results
Table 4 contains all the experimental results.
Our experiments show that, while all of the six models perform well when the test set is drawn from the same corpus as the training and development set, accuracy is significantly lower when we test these trained models on a test set drawn from a separate NLI corpus, the average difference in accuracy being 24.9 points across all experiments.
Accuracy drops the most when a model is tested on SICK. The difference in this case is between 19.0-29.0 points when trained on MultiNLI, between 31.6-33.7 points when trained on SNLI and between 31.1-33.0 when trained on SNLI + MultiNLI. This was expected, as the method of constructing the sentence pairs was different, and hence there is too much difference in the kind of sentence pairs included in the training and test sets for transfer learning to work. However, the drop was more dramatic than expected.
The most surprising result was that the accuracy of all models drops significantly even when the models were trained on MultiNLI and tested on SNLI (3.6-11.1 points). This is surprising as both of these datasets have been constructed with a similar data collection method using the same definition of entailment, contradiction and neutral. The sentences included in SNLI are also much simpler compared to those in MultiNLI, as they are taken from the Flickr image captions. This might also explain why the difference in accuracy for all of the six models is lowest when the models are trained on MultiNLI and tested on SNLI. It is also very surprising that the model with the biggest difference in accuracy was ESIM + ELMo which includes a pre-trained ELMo language model. BERT performed significantly better than the other models in this experiment having an accuracy of 80.4% and only 3.6 point difference in accuracy.
The poor performance of most of the models with the MultiNLI-SNLI dataset pair is also very surprising given that neural network models do not seem to suffer a lot from introduction of new genres to the test set which were not included in the training set, as can be seen from the small difference in test accuracies for the matched and mismatched test sets (see e.g BIBREF5 ). In a sense SNLI could be seen as a separate genre not included in MultiNLI. This raises the question if the SNLI and MultiNLI have e.g. different kinds of annotation artifacts, which makes transfer learning between these datasets more difficult.
All the models, except BERT, perform almost equally poorly across all the experiments. Both BiLSTM-max and HBMP have an average drop in accuracy of 24.4 points, while the average for KIM is 25.5 and for ESIM + ELMo 25.6. ESIM has the highest average difference of 27.0 points. In contrast to the findings of BIBREF1 , utilizing external knowledge did not improve the model's generalization capability, as KIM performed equally poorly across all dataset combinations.
Also including a pretrained ELMo language model did not improve the results significantly. The overall performance of BERT was significantly better than the other models, having the lowest average difference in accuracy of 22.5 points. Our baselines for SNLI (90.4%) and SNLI + MultiNLI (90.6%) outperform the previous state-of-the-art accuracy for SNLI (90.1%) by BIBREF24 .
To understand better the types of errors made by neural network models in NLI we looked at some example failure-pairs for selected models. Tables 5 and 6 contain some randomly selected failure-pairs for two models: BERT and HBMP, and for three set-ups: SNLI $\rightarrow $ SICK, SNLI $\rightarrow $ MultiNLI and MultiNLI $\rightarrow $ SICK. We chose BERT as the current state-of-the-art NLI model. HBMP was selected as a high performing model in the sentence encoding model type. Although the listed sentence pairs represent just a small sample of the errors made by these models, they do include some interesting examples. First, it seems that SICK has a more narrow notion of contradiction – corresponding more to logical contradiction – compared to the contradiction in SNLI and MultiNLI, where especially in SNLI the sentences are contradictory if they describe a different state of affairs. This is evident in the sentence pair: A young child is running outside over the fallen leaves and A young child is lying down on a gravel road that is covered with dead leaves, which is predicted by BERT to be contradiction although the gold label is neutral. Another interesting example is the sentence pair: A boat pear with people boarding and disembarking some boats. and people are boarding and disembarking some boats, which is incorrectly predicted by BERT to be contradiction although it has been labeled as entailment. Here the two sentences describe the same event from different points of view: the first one describing a boat pear with some people on it and the second one describing the people directly. Interestingly, the added information about the boat pear seems to confuse the model.
Discussion and Conclusion
In this paper we have shown that neural network models for NLI fail to generalize across different NLI benchmarks. We experimented with six state-of-the-art models covering sentence encoding approaches, cross-sentence attention models and pre-trained and fine-tuned language models. For all the systems, the accuracy drops between 3.6-33.7 points (the average drop being 24.9 points), when testing with a test set drawn from a separate corpus from that of the training data, as compared to when the test and training data are splits from the same corpus. Our findings, together with the previous negative findings, indicate that the state-of-the-art models fail to capture the semantics of NLI in a way that will enable them to generalize across different NLI situations.
The results highlight two issues to be taken into consideration: a) models trained on datasets that involve only a fraction of what NLI is will fail when tested on datasets that target a slightly different definition of inference. This is evident when we move from the SNLI to the SICK dataset. b) NLI is to some extent genre/context dependent. Training on SNLI and testing on MultiNLI gives worse results than vice versa. This is particularly evident in the case of BERT. These results highlight that training on multiple genres helps. However, this help is still not enough given that, even in the case of training on MultiNLI (multi genre) and testing on SNLI (single genre and same definition of inference as MultiNLI), accuracy drops significantly.
We also found that involving a large pre-trained language model helps with transfer learning when the datasets are similar enough, as is the case with SNLI and MultiNLI. Our results further corroborate the power of pre-trained and fine-tuned language models like BERT in NLI. However, not even BERT is able to generalize from SNLI and MultiNLI to SICK, possibly due to the difference between what kind of inference relations are contained in these datasets.
Our findings motivate us to look for novel neural network architectures and approaches that better capture the semantics of natural language inference beyond individual datasets. However, there seems to be a need to start with better constructed datasets, i.e. datasets that capture more than just fractions of what NLI is in reality. Better NLI systems need to be able to be more versatile on the types of inference they can recognize. Otherwise, we would be stuck with systems that can cover only some aspects of NLI. On a theoretical level, and in connection to the previous point, we need a better understanding of the range of phenomena NLI must be able to cover and focus our future endeavours for dataset construction in this direction. In order to do this a more systematic study is needed on the different kinds of entailment relations NLI datasets need to include. Our future work will include a more systematic and broad-coverage analysis of the types of errors the models make and in what kinds of sentence-pairs they make successful predictions.
Acknowledgments
The first author is supported by the FoTran project, funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113).
The first author also gratefully acknowledges the support of the Academy of Finland through project 314062 from the ICT 2023 call on Computation, Machine Learning and Artificial Intelligence.
The second author is supported by grant 2014-39 from the Swedish Research Council, which funds the Centre for Linguistic Theory and Studies in Probability (CLASP) in the Department of Philosophy, Linguistics, and Theory of Science at the University of Gothenburg. | BiLSTM-max, HBMP, ESIM, KIM, ESIM + ELMo, and BERT |
0a521541b9e2b5c6d64fb08eb318778eba8ac9f7 | 0a521541b9e2b5c6d64fb08eb318778eba8ac9f7_0 | Q: Which datasets were used?
Text: Introduction
Natural Language Inference (NLI) has attracted considerable interest in the NLP community and, recently, a large number of neural network-based systems have been proposed to deal with the task. One can attempt a rough categorization of these systems into: a) sentence encoding systems, and b) other neural network systems. Both of them have been very successful, with the state of the art on the SNLI and MultiNLI datasets being 90.4%, which is our baseline with BERT BIBREF0 , and 86.7% BIBREF0 respectively. However, a big question with respect to these systems is their ability to generalize outside the specific datasets they are trained and tested on. Recently, BIBREF1 have shown that state-of-the-art NLI systems break considerably easily when, instead of tested on the original SNLI test set, they are tested on a test set which is constructed by taking premises from the training set and creating several hypotheses from them by changing at most one word within the premise. The results show a very significant drop in accuracy for three of the four systems. The system that was more difficult to break and had the least loss in accuracy was the system by BIBREF2 which utilizes external knowledge taken from WordNet BIBREF3 .
In this paper we show that NLI systems that have been very successful in specific NLI benchmarks, fail to generalize when trained on a specific NLI dataset and then these trained models are tested across test sets taken from different NLI benchmarks. The results we get are in line with BIBREF1 , showing that the generalization capability of the individual NLI systems is very limited, but, what is more, they further show the only system that was less prone to breaking in BIBREF1 , breaks too in the experiments we have conducted.
We train six different state-of-the-art models on three different NLI datasets and test these trained models on an NLI test set taken from another dataset designed for the same NLI task, namely for the task to identify for sentence pairs in the dataset if one sentence entails the other one, if they are in contradiction with each other or if they are neutral with respect to inferential relationship.
One would expect that if a model learns to correctly identify inferential relationships in one dataset, then it would also be able to do so in another dataset designed for the same task. Furthermore, two of the datasets, SNLI BIBREF4 and MultiNLI BIBREF5 , have been constructed using the same crowdsourcing approach and annotation instructions BIBREF5 , leading to datasets with the same or at least very similar definition of entailment. It is therefore reasonable to expect that transfer learning between these datasets is possible. As SICK BIBREF6 dataset has been machine-constructed, a bigger difference in performance is expected.
In this paper we show that, contrary to our expectations, most models fail to generalize across the different datasets. However, our experiments also show that BERT BIBREF0 performs much better than the other models in experiments between SNLI and MultiNLI. Nevertheless, even BERT fails when testing on SICK. In addition to the negative results, our experiments further highlight the power of pre-trained language models, like BERT, in NLI.
The negative results of this paper are significant for the NLP research community as well as to NLP practice as we would like our best models to not only to be able to perform well in a specific benchmark dataset, but rather capture the more general phenomenon this dataset is designed for. The main contribution of this paper is that it shows that most of the best performing neural network models for NLI fail in this regard. The second, and equally important, contribution is that our results highlight that the current NLI datasets do not capture the nuances of NLI extensively enough.
Related Work
The ability of NLI systems to generalize and related skepticism has been raised in a number of recent papers. BIBREF1 show that the generalization capabilities of state-of-the-art NLI systems, in cases where some kind of external lexical knowledge is needed, drops dramatically when the SNLI test set is replaced by a test set where the premise and the hypothesis are otherwise identical except for at most one word. The results show a very significant drop in accuracy. BIBREF7 recognize the generalization problem that comes with training on datasets like SNLI, which tend to be homogeneous and with little linguistic variation. In this context, they propose to better train NLI models by making use of adversarial examples.
Multiple papers have reported hidden bias and annotation artifacts in the popular NLI datasets SNLI and MultiNLI allowing classification based on the hypothesis sentences alone BIBREF8 , BIBREF9 , BIBREF10 .
BIBREF11 evaluate the robustness of NLI models using datasets where label preserving swapping operations have been applied, reporting significant performance drops compared to the results with the original dataset. In these experiments, like in the BreakingNLI experiment, the systems that seem to be performing better, i.e. less prone to breaking, are the ones where some kind of external knowledge is used by the model (KIM by BIBREF2 is one of those systems).
On a theoretical and methodological level, there is discussion on the nature of various NLI datasets, as well as the definition of what counts as NLI and what does not. For example, BIBREF12 , BIBREF13 present an overview of the most standard datasets for NLI and show that the definitions of inference in each of them are actually quite different, capturing only fragments of what seems to be a more general phenomenon.
BIBREF4 show that a simple LSTM model trained on the SNLI data fails when tested on SICK. However, their experiment is limited to this single architecture and dataset pair. BIBREF5 show that different models that perform well on SNLI have lower accuracy on MultiNLI. However in their experiments they did not systematically test transfer learning between the two datasets, but instead used separate systems where the training and test data were drawn from the same corpora.
Experimental Setup
In this section we describe the datasets and model architectures included in the experiments.
Data
We chose three different datasets for the experiments: SNLI, MultiNLI and SICK. All of them have been designed for NLI involving three-way classification with the labels entailment, neutral and contradiction. We did not include any datasets with two-way classification, e.g. SciTail BIBREF14 . As SICK is a relatively small dataset with approximately only 10k sentence pairs, we did not use it as training data in any experiment. We also trained the models with a combined SNLI + MultiNLI training set.
For all the datasets we report the baseline performance where the training and test data are drawn from the same corpus. We then take these trained models and test them on a test set taken from another NLI corpus. For the case where the models are trained with SNLI + MultiNLI we report the baseline using the SNLI test data. All the experimental combinations are listed in Table 1 . Examples from the selected datasets are provided in Table 2 . To be more precise, we vary three things: training dataset, model and testing dataset. We should qualify this though, since the three datasets we look at, can also be grouped by text domain/genre and type of data collection, with MultiNLI and SNLI using the same data collection style, and SNLI and SICK using roughly the same domain/genre. Hopefully, our set up will let us determine which of these factors matters the most.
We describe the source datasets in more detail below.
The Stanford Natural Language Inference (SNLI) corpus BIBREF4 is a dataset of 570k human-written sentence pairs manually labeled with the labels entailment, contradiction, and neutral. The source for the premise sentences in SNLI were image captions taken from the Flickr30k corpus BIBREF15 .
The Multi-Genre Natural Language Inference (MultiNLI) corpus BIBREF5 consists of 433k human-written sentence pairs labeled with entailment, contradiction and neutral. MultiNLI contains sentence pairs from ten distinct genres of both written and spoken English. Only five genres are included in the training set. The development and test sets have been divided into matched and mismatched, where the former includes only sentences from the same genres as the training data, and the latter includes sentences from the remaining genres not present in the training data.
We used the matched development set (MultiNLI-m) for the experiments. The MultiNLI dataset was annotated using very similar instructions as for the SNLI dataset. Therefore we can assume that the definitions of entailment, contradiction and neutral are the same in these two datasets.
SICK BIBREF6 is a dataset that was originally constructed to test compositional distributional semantics (DS) models. The dataset contains 9,840 examples pertaining to logical inference (negation, conjunction, disjunction, apposition, relative clauses, etc.). The dataset was automatically constructed taking pairs of sentences from a random subset of the 8K ImageFlickr data set BIBREF15 and the SemEval 2012 STS MSRVideo Description dataset BIBREF16 .
Model and Training Details
We perform experiments with six high-performing models covering the sentence encoding models, cross-sentence attention models as well as fine-tuned pre-trained language models.
For sentence encoding models, we chose a simple one-layer bidirectional LSTM with max pooling (BiLSTM-max) with a hidden size of 600D per direction, used e.g. in InferSent BIBREF17 , and HBMP BIBREF18 . For the other models, we have chosen ESIM BIBREF19 , which includes cross-sentence attention, and KIM BIBREF2 , which has cross-sentence attention and utilizes external knowledge. We also selected two models involving a pre-trained language model, namely ESIM + ELMo BIBREF20 and BERT BIBREF0 . KIM is particularly interesting in this context as it performed significantly better than other models in the Breaking NLI experiment conducted by BIBREF1 . The success of pre-trained language models in multiple NLP tasks makes ESIM + ELMo and BERT interesting additions to this experiment. Table 3 lists the different models used in the experiments.
For BiLSTM-max we used the Adam optimizer BIBREF21 , a learning rate of 5e-4 and a batch size of 64. The learning rate was decreased by a factor of 0.2 after each epoch if the model did not improve. Dropout of 0.1 was used between the layers of the multi-layer perceptron classifier, except before the last layer. The BiLSTM-max models were initialized with pre-trained GloVe 840B word embeddings of size 300 dimensions BIBREF22 , which were fine-tuned during training. Our BiLSTM-max model was implemented in PyTorch.
For HBMP, ESIM, KIM and BERT we used the original implementations with the default settings and hyperparameter values as described in BIBREF18 , BIBREF19 , BIBREF2 and BIBREF0 respectively. For BERT we used the uncased 768-dimensional model (BERT-base). For ESIM + ELMo we used the AllenNLP BIBREF23 PyTorch implementation with the default settings and hyperparameter values.
Experimental Results
Table 4 contains all the experimental results.
Our experiments show that, while all of the six models perform well when the test set is drawn from the same corpus as the training and development set, accuracy is significantly lower when we test these trained models on a test set drawn from a separate NLI corpus, the average difference in accuracy being 24.9 points across all experiments.
Accuracy drops the most when a model is tested on SICK. The difference in this case is between 19.0-29.0 points when trained on MultiNLI, between 31.6-33.7 points when trained on SNLI and between 31.1-33.0 when trained on SNLI + MultiNLI. This was expected, as the method of constructing the sentence pairs was different, and hence there is too much difference in the kind of sentence pairs included in the training and test sets for transfer learning to work. However, the drop was more dramatic than expected.
The most surprising result was that the accuracy of all models drops significantly even when the models were trained on MultiNLI and tested on SNLI (3.6-11.1 points). This is surprising as both of these datasets have been constructed with a similar data collection method using the same definition of entailment, contradiction and neutral. The sentences included in SNLI are also much simpler compared to those in MultiNLI, as they are taken from the Flickr image captions. This might also explain why the difference in accuracy for all of the six models is lowest when the models are trained on MultiNLI and tested on SNLI. It is also very surprising that the model with the biggest difference in accuracy was ESIM + ELMo which includes a pre-trained ELMo language model. BERT performed significantly better than the other models in this experiment having an accuracy of 80.4% and only 3.6 point difference in accuracy.
The poor performance of most of the models with the MultiNLI-SNLI dataset pair is also very surprising given that neural network models do not seem to suffer a lot from introduction of new genres to the test set which were not included in the training set, as can be seen from the small difference in test accuracies for the matched and mismatched test sets (see e.g BIBREF5 ). In a sense SNLI could be seen as a separate genre not included in MultiNLI. This raises the question if the SNLI and MultiNLI have e.g. different kinds of annotation artifacts, which makes transfer learning between these datasets more difficult.
All the models, except BERT, perform almost equally poorly across all the experiments. Both BiLSTM-max and HBMP have an average drop in accuracy of 24.4 points, while the average for KIM is 25.5 and for ESIM + ELMo 25.6. ESIM has the highest average difference of 27.0 points. In contrast to the findings of BIBREF1 , utilizing external knowledge did not improve the model's generalization capability, as KIM performed equally poorly across all dataset combinations.
Also including a pretrained ELMo language model did not improve the results significantly. The overall performance of BERT was significantly better than the other models, having the lowest average difference in accuracy of 22.5 points. Our baselines for SNLI (90.4%) and SNLI + MultiNLI (90.6%) outperform the previous state-of-the-art accuracy for SNLI (90.1%) by BIBREF24 .
To understand better the types of errors made by neural network models in NLI we looked at some example failure-pairs for selected models. Tables 5 and 6 contain some randomly selected failure-pairs for two models: BERT and HBMP, and for three set-ups: SNLI $\rightarrow $ SICK, SNLI $\rightarrow $ MultiNLI and MultiNLI $\rightarrow $ SICK. We chose BERT as the current state-of-the-art NLI model. HBMP was selected as a high performing model in the sentence encoding model type. Although the listed sentence pairs represent just a small sample of the errors made by these models, they do include some interesting examples. First, it seems that SICK has a more narrow notion of contradiction – corresponding more to logical contradiction – compared to the contradiction in SNLI and MultiNLI, where especially in SNLI the sentences are contradictory if they describe a different state of affairs. This is evident in the sentence pair: A young child is running outside over the fallen leaves and A young child is lying down on a gravel road that is covered with dead leaves, which is predicted by BERT to be contradiction although the gold label is neutral. Another interesting example is the sentence pair: A boat pear with people boarding and disembarking some boats. and people are boarding and disembarking some boats, which is incorrectly predicted by BERT to be contradiction although it has been labeled as entailment. Here the two sentences describe the same event from different points of view: the first one describing a boat pear with some people on it and the second one describing the people directly. Interestingly, the added information about the boat pear seems to confuse the model.
Discussion and Conclusion
In this paper we have shown that neural network models for NLI fail to generalize across different NLI benchmarks. We experimented with six state-of-the-art models covering sentence encoding approaches, cross-sentence attention models and pre-trained and fine-tuned language models. For all the systems, the accuracy drops between 3.6-33.7 points (the average drop being 24.9 points), when testing with a test set drawn from a separate corpus from that of the training data, as compared to when the test and training data are splits from the same corpus. Our findings, together with the previous negative findings, indicate that the state-of-the-art models fail to capture the semantics of NLI in a way that will enable them to generalize across different NLI situations.
The results highlight two issues to be taken into consideration: a) models trained on datasets that involve only a fraction of what NLI is will fail when tested on datasets that target a slightly different definition of inference. This is evident when we move from the SNLI to the SICK dataset. b) NLI is to some extent genre/context dependent. Training on SNLI and testing on MultiNLI gives worse results than vice versa. This is particularly evident in the case of BERT. These results highlight that training on multiple genres helps. However, this help is still not enough given that, even in the case of training on MultiNLI (multi genre) and testing on SNLI (single genre and same definition of inference as MultiNLI), accuracy drops significantly.
We also found that involving a large pre-trained language model helps with transfer learning when the datasets are similar enough, as is the case with SNLI and MultiNLI. Our results further corroborate the power of pre-trained and fine-tuned language models like BERT in NLI. However, not even BERT is able to generalize from SNLI and MultiNLI to SICK, possibly due to the difference between what kind of inference relations are contained in these datasets.
Our findings motivate us to look for novel neural network architectures and approaches that better capture the semantics of natural language inference beyond individual datasets. However, there seems to be a need to start with better constructed datasets, i.e. datasets that capture more than just fractions of what NLI is in reality. Better NLI systems need to be able to be more versatile on the types of inference they can recognize. Otherwise, we would be stuck with systems that can cover only some aspects of NLI. On a theoretical level, and in connection to the previous point, we need a better understanding of the range of phenomena NLI must be able to cover and focus our future endeavours for dataset construction in this direction. In order to do this a more systematic study is needed on the different kinds of entailment relations NLI datasets need to include. Our future work will include a more systematic and broad-coverage analysis of the types of errors the models make and in what kinds of sentence-pairs they make successful predictions.
Acknowledgments
The first author is supported by the FoTran project, funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113).
The first author also gratefully acknowledges the support of the Academy of Finland through project 314062 from the ICT 2023 call on Computation, Machine Learning and Artificial Intelligence.
The second author is supported by grant 2014-39 from the Swedish Research Council, which funds the Centre for Linguistic Theory and Studies in Probability (CLASP) in the Department of Philosophy, Linguistics, and Theory of Science at the University of Gothenburg. | SNLI, MultiNLI and SICK |
11e376f98df42f487298ec747c32d485c845b5cd | 11e376f98df42f487298ec747c32d485c845b5cd_0 | Q: What was the baseline?
Text: Introduction
Social networks are currently very popular. Some of the biggest ones, such as Facebook, Twitter and YouTube, have an extremely large number of users. Thus, controlling the content of those platforms is essential. For years, social media companies such as Twitter, Facebook, and YouTube have been investing hundreds of millions of euros in this task BIBREF0, BIBREF1. However, this effort is not enough, since it is primarily based on manual moderation to identify and delete offensive material. The process is labour intensive, time consuming, and not sustainable or scalable in reality BIBREF2, BIBREF0, BIBREF3.
In the sixth international workshop on Vietnamese Language and Speech Processing (VLSP 2019), the Hate Speech Detection (HSD) task was proposed as one of the shared tasks to handle the problem of controlling content in SNSs. HSD requires building a multi-class classification model that is capable of classifying an item into one of three classes (hate, offensive, clean). Hate speech (hate): an item is identified as hate speech if it (1) targets individuals or groups on the basis of their characteristics; (2) demonstrates a clear intention to incite harm, or to promote hatred; (3) may or may not use offensive or profane words. Offensive but not hate speech (offensive): an item (post/comment) may contain offensive words but does not target individuals or groups on the basis of their characteristics. Neither offensive nor hate speech (clean): a normal item that does not contain offensive language or hate speech.
The term `hate speech' was formally defined as `any communication that disparages a person or a group on the basis of some characteristics (to be referred to as types of hate or hate classes) such as race, colour, ethnicity, gender, sexual orientation, nationality, religion, or other characteristics' BIBREF4. Much research has been conducted in recent years to develop automatic methods for hate speech detection in the social media domain. These typically employ semantic content analysis techniques built on Natural Language Processing (NLP) and Machine Learning (ML) methods. The task typically involves classifying textual content into non-hate or hateful. This HSD task is much more difficult because it requires classifying text into three classes, with the hate and offensive classes being quite hard to distinguish, even for humans.
In this paper, we propose a method to handle this HSD problem. Our system combines multiple text representations and model architectures in order to make diverse predictions. The system is heavily based on the ensemble method. The next section presents the details of our system, including data preparation (how we clean text and build text representations), the architectures of the models used in the system, and how we combine them together. The third section reports our experiments and results in the HSD shared task of VLSP 2019. The final section is our conclusion, with the advantages and disadvantages of the system, followed by our perspectives.
System description
In this section, we present the system architecture. It covers how we pre-process text, what types of text representation we use, and the models used in our system. In the end, we combine the model results by using an ensemble technique.
System description ::: System overview
The fundamental idea of this system is to build a system that views an input from diverse perspectives. This is because of the variety of meanings in the Vietnamese language, especially with acronyms and teen code. To create this diversity, after cleaning the raw text input, we use multiple types of word tokenizers. We combine each of these tokenizers with several types of representation methods, including word-to-vector methods such as continuous bag of words BIBREF5, and pre-trained embeddings such as fasttext (trained on Vietnamese Wikipedia) BIBREF6 and sonvx (trained on Vietnamese newspapers) BIBREF7. Each sentence has a set of words corresponding to a set of word vectors, and that set of word vectors is a representation of the sentence. We also build a sentence embedding by using the RoBERTa architecture BIBREF8. The CBOW and RoBERTa models were trained on text from several resources, including VLSP 2016 Sentiment Analysis, VLSP 2018 Sentiment Analysis, VLSP 2019 HSD and text crawled from Facebook. After obtaining the sentence representations, we use several classification models to classify the input sentences. Those models are described in detail in section SECREF13. With the multiple output results, we use an ensemble method to combine them and output the final result. The ensemble method we use here is stacking, which is introduced in section SECREF16.
System description ::: Data pre-processing
The content of the dataset provided in this HSD task is very diverse. Words with the same meaning are written in various forms (teen code, without tone marks, emojis, ...) depending on the style of the users. The dataset was crawled from various sources with multiple text encodings. In order to make training easier, all types of encoding need to be unified. This cleaning module is used in two processes: cleaning the data before training and cleaning the input in the inference phase. The following are the data processing steps that we use (a condensed code sketch of the full pipeline follows the list):
Step 1: Format encoding. Vietnamese has many accents and tone marks, and different Unicode typing programs may produce different outputs for the same typed text. To unify them, we build a library named visen. For example, the input "thíêt kê" is normalized to "thiết kế" as the output.
Step 2: On social networks, people express their feelings heavily through emojis. An emoticon is often a single special Unicode character, but sometimes it is composed of multiple normal characters such as `: ( = ]'. We build a dictionary mapping each such multi-character emoticon to a single Unicode character, like other emojis, to keep the representation unified.
Step 3: Remove unseen characters. An unseen character is invisible to a human reader, but for a computer it makes the text harder to process and effectively inserts breaks between words, punctuation, and emojis. This step aims at reducing the number of words in the dictionary, which is important, especially with a low-resource dataset like the one in this HSD task.
Step 4: For models requiring Vietnamese word segmentation as input, we use BIBREF9, BIBREF10 to tokenize the input text.
Step 5: Lower-case all strings. We found experimentally that casing does not have a significant impact on the result, but lower-casing reduces the number of words in the dictionary.
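The sketch below illustrates these cleaning steps in Python. It is only a minimal approximation: NFC normalization stands in for the visen library (which additionally fixes Vietnamese tone-mark placement), and the emoticon map and invisible-character list are small illustrative placeholders rather than the dictionaries used in the actual system.

```python
import unicodedata

# Illustrative placeholders; the real emoticon dictionary and invisible-character
# list used by the system are larger.
EMOTICON_MAP = {":(": "\U0001F61E", ":)": "\U0001F642", "=]": "\U0001F642"}
INVISIBLE = dict.fromkeys(map(ord, "\u200b\u200c\u200d\ufeff"), None)

def clean_comment(text: str) -> str:
    # Step 1: unify Unicode composition (a rough stand-in for the visen library).
    text = unicodedata.normalize("NFC", text)
    # Step 2: map multi-character emoticons to single emoji code points.
    for emoticon, emoji in EMOTICON_MAP.items():
        text = text.replace(emoticon, emoji)
    # Step 3: drop invisible characters such as zero-width spaces.
    text = text.translate(INVISIBLE)
    # Step 5: lower-case everything (Step 4, word segmentation, is model-specific).
    return text.lower().strip()

print(clean_comment("Thiết kế đẹp :) qu\u200bá!"))  # -> "thiết kế đẹp 🙂 quá!"
```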
RoBERTa BIBREF8 proposed an optimized method for pretraining self-supervised NLP systems. In our system, we use RoBERTa not only to build sentence representations but also to augment data. Using the masking mechanism, we replace a word in the input sentence with another word proposed by the RoBERTa model. To reduce the impact of the replacement, the chosen words are common words that appear in nearly all three classes of the dataset. For example, with the input `nhổn làm gắt vl', we can generate augmented outputs such as: `vl làm gắt qá', `còn làm vl vậy', `vl làm đỉnh vl' or `thanh chút gắt vl'.
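A hedged sketch of this mask-based augmentation with the Hugging Face fill-mask pipeline is shown below. The checkpoint path and the whitelist of common cross-class words are placeholders (the paper trains its own RoBERTa on the VLSP corpora and Facebook text), and masking by whitespace position is a simplification of the model's real subword masking.

```python
from transformers import pipeline

# Placeholder path: the system uses its own RoBERTa trained on VLSP + Facebook text.
fill_mask = pipeline("fill-mask", model="path/to/vietnamese-roberta")

COMMON_WORDS = {"làm", "vậy", "quá"}  # illustrative whitelist of cross-class words

def augment(sentence: str, position: int, top_k: int = 10):
    """Replace the word at `position` with common words suggested by the masked LM."""
    tokens = sentence.split()
    tokens[position] = fill_mask.tokenizer.mask_token
    masked = " ".join(tokens)
    augmented = []
    for candidate in fill_mask(masked, top_k=top_k):
        word = candidate["token_str"].strip()
        if word in COMMON_WORDS:  # keep only common, class-neutral replacements
            augmented.append(masked.replace(fill_mask.tokenizer.mask_token, word))
    return augmented
```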
System description ::: Models architecture
The social comment dataset has high variety, so the core idea is to use multiple model architectures to handle the data from many viewpoints. In our system, we use five different model architectures combining several types of CNN and RNN. Each model uses some type of word embedding, or handles the sentence embedding directly, to achieve the best general result. The source code of the five models is extended from a GitHub repository.
The first model is TextCNN (figure FIGREF2), proposed in BIBREF11. It contains only CNN blocks followed by some dense layers. The outputs of multiple CNN blocks with different kernel sizes are concatenated with each other.
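The following tf.keras sketch shows the general TextCNN shape described above. The kernel sizes, filter counts, and dense-layer sizes are illustrative assumptions, not the exact settings used in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_textcnn(vocab_size, emb_dim=200, max_len=100, n_classes=3,
                  kernel_sizes=(2, 3, 4, 5), filters=64):
    inp = layers.Input(shape=(max_len,))
    x = layers.Embedding(vocab_size, emb_dim)(inp)
    branches = []
    for k in kernel_sizes:
        conv = layers.Conv1D(filters, k, activation="relu")(x)
        branches.append(layers.GlobalMaxPooling1D()(conv))
    # CNN branches with different kernel sizes are concatenated, then fed to dense layers.
    x = layers.Concatenate()(branches)
    x = layers.Dense(128, activation="relu")(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(inp, out)
```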
The second model is VDCNN (figure FIGREF5), inspired by the research in BIBREF12. Like the TextCNN model, it contains multiple CNN blocks; the addition in this model is its residual connections.
The third model is a simple bidirectional LSTM model (figure FIGREF15). It contains multiple bidirectional LSTM blocks stacked on top of each other.
The fourth model is LSTMCNN (figure FIGREF24). Before going through the CNN blocks, the sequence of word embeddings is transformed by a bidirectional LSTM block.
The final model is the system named SARNN (figure FIGREF25). It adds an attention block between the LSTM blocks.
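As a rough illustration of this recurrent family, the sketch below places a dot-product self-attention block between two bidirectional LSTM blocks, in the spirit of SARNN. It is not the exact architecture from figure FIGREF25; the layer sizes and the specific attention mechanism are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_sarnn_like(vocab_size, emb_dim=200, max_len=100, n_classes=3):
    inp = layers.Input(shape=(max_len,))
    x = layers.Embedding(vocab_size, emb_dim)(inp)
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
    x = layers.Attention()([x, x])  # self-attention over the first BiLSTM's states
    x = layers.Bidirectional(layers.LSTM(64))(x)
    x = layers.Dense(64, activation="relu")(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(inp, out)
```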
System description ::: Ensemble method
Ensemble methods are machine learning techniques that combine several base models in order to produce one optimal predictive model. There are three main types of ensemble methods: bagging, boosting, and stacking. In this system, we use the stacking method. In this method, the output of each model is not only a class id but also a probability for each of the three classes; these probabilities become features for the ensemble model. The stacking ensemble model here is a simple fully-connected model whose input is all the probabilities output by the sub-models and whose output is the probability of each class.
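A minimal sketch of the feature construction for this stacking step is shown below, assuming each sub-model exposes a Keras-style predict method that returns per-class probabilities; the input handling per model is a placeholder.

```python
import numpy as np

def stack_probabilities(sub_models, inputs_per_model):
    """Concatenate each sub-model's 3-class probabilities into one feature matrix.

    `inputs_per_model[i]` holds the tokenized/embedded inputs expected by
    `sub_models[i]`; both names are placeholders for this sketch.
    """
    probs = [m.predict(x) for m, x in zip(sub_models, inputs_per_model)]  # each (n, 3)
    return np.concatenate(probs, axis=1)  # shape (n, 3 * n_models)
```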
Experiment
The dataset in this HSD task is highly imbalanced: the clean class dominates with 91.5%, the offensive class takes 5%, and the remaining 3.5% belongs to the hate class. To enable the models to learn from this imbalanced data, we inject class weights into the loss function, with the weights for (clean, offensive, hate) set to $(0.09, 0.95, 0.96)$. Formula DISPLAY_FORM17 is the loss function applied to all models in our system, where $w_i$ is the class weight, $y_i$ is the ground truth, and $\hat{y}_i$ is the output of the model. If the class weights are not set, we find that the models cannot adjust their parameters and tend to predict the clean class for every sample.
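Since DISPLAY_FORM17 is not reproduced here, the sketch below assumes the standard weighted categorical cross-entropy form, $-\sum_i w_i\, y_i \log \hat{y}_i$ averaged over samples; the weight values match the ones quoted above.

```python
import numpy as np

CLASS_WEIGHTS = np.array([0.09, 0.95, 0.96])  # (clean, offensive, hate)

def weighted_cross_entropy(y_true, y_pred, eps=1e-7):
    """Assumed weighted loss: mean over samples of -sum_i w_i * y_i * log(y_hat_i)."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    per_sample = -(CLASS_WEIGHTS * y_true * np.log(y_pred)).sum(axis=1)
    return per_sample.mean()
```

In a tf.keras training setup, passing class_weight={0: 0.09, 1: 0.95, 2: 0.96} to model.fit has a comparable effect.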
We experiment with 8 types of embedding in total; a sketch of the CBOW training setup follows the list:
comment: CBOW embedding trained on all comments in the dataset; words are split by spaces. The embedding size is 200.
comment_bpe: CBOW embedding trained on all comments in the dataset; words are split into subwords with BPE. The embedding size is 200.
comment_tokenize: CBOW embedding trained on all comments in the dataset; words are split by spaces after multi-syllable words are joined using BIBREF9, BIBREF13, BIBREF10. The embedding size is 200.
roberta: sentence embedding trained on all comments in the dataset using the RoBERTa architecture. The embedding size is 256.
fasttext, sonvx*: pre-trained general-domain word embeddings. Before mapping words to vectors, multi-syllable words are joined using BIBREF9, BIBREF13, BIBREF10. The embedding size of fasttext is 300, and (sonvx_wiki, sonvx_baomoi_w2, sonvx_baomoi_w5) have embedding sizes of (400, 300, 400), respectively.
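The CBOW variants above can be trained with gensim as sketched below; sg=0 selects CBOW, and the toy corpus, window, and min_count values are placeholders for the real cleaned comment corpus and its settings.

```python
from gensim.models import Word2Vec

# Toy placeholder corpus; the real corpus is the cleaned comments plus the other VLSP texts.
corpus = [["thiết_kế", "đẹp", "quá"], ["làm", "gắt", "vl"]]

cbow = Word2Vec(
    sentences=corpus,
    vector_size=200,  # matches the 200-dimensional comment embeddings
    sg=0,             # sg=0 selects CBOW rather than skip-gram
    window=5,
    min_count=1,
    workers=4,
)
vector = cbow.wv["vl"]  # 200-dimensional vector for one word
```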
In our experiment, the dataset is split into two parts, a train set and a dev set, with a $(0.9, 0.1)$ ratio. Both subsets keep the same class imbalance ratio as the full set. For each combination of model and word embedding, we train the model on the train set until it achieves the best loss score on the dev set. Table TABREF12 shows the best result of each combination on the f1_macro score.
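A stratified split of this kind can be produced with scikit-learn as below; texts and labels stand for the cleaned comments and their class ids.

```python
from sklearn.model_selection import train_test_split

def split_dataset(texts, labels, dev_ratio=0.1, seed=42):
    # Stratifying on the labels keeps the 91.5 / 5 / 3.5 class ratio in both subsets.
    return train_test_split(texts, labels, test_size=dev_ratio,
                            stratify=labels, random_state=seed)
```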
For each model with the best fit on the dev set, we export the class probability distribution for each sample in the dev set. In this case, we only use the results of models whose f1_macro score is larger than 0.67. The class probability distributions are then used as input features for a dense model with only one hidden layer (size 128). The ensemble model is trained on the samples of the dev set. The best result on the dev set is 0.7356. The final result submitted to the public leaderboard is 0.73019, and on the private leaderboard it is 0.58455. The private result is considerably worse, probably because the model overfits the train set and the tuning on the public test set.
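The meta-model described here can be sketched as a single-hidden-layer dense network over the stacked probability features (for example, the output of stack_probabilities above, restricted to sub-models with f1_macro above 0.67); the optimizer, epoch count, and batch size are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def train_meta_model(features, y_dev_onehot):
    """`features`: stacked dev-set probabilities of the kept sub-models;
    `y_dev_onehot`: one-hot dev labels. Both are placeholder names."""
    meta = tf.keras.Sequential([
        layers.Input(shape=(features.shape[1],)),
        layers.Dense(128, activation="relu"),  # the single hidden layer of size 128
        layers.Dense(3, activation="softmax"),
    ])
    meta.compile(optimizer="adam", loss="categorical_crossentropy")
    meta.fit(features, y_dev_onehot, epochs=20, batch_size=32, verbose=0)
    return meta
```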
Statistics of the final result on the dev set show that most cases misclassified from the offensive and hate classes into the clean class are samples containing the word `vl' (62% in the offensive class and 48% in the hate class). This means the model overfits the word `vl' to the clean class, which makes sense because `vl' appears very frequently in the clean-class part of the dataset.
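Keyword-level error statistics of this kind can be computed with a few lines of pandas, assuming a dev-set frame with text, label, and pred columns (these column names are placeholders).

```python
import pandas as pd

def keyword_error_rate(df: pd.DataFrame, keyword="vl",
                       true_cls="offensive", pred_cls="clean"):
    """Fraction of `true_cls` samples misclassified as `pred_cls` that contain `keyword`."""
    wrong = df[(df["label"] == true_cls) & (df["pred"] == pred_cls)]
    if len(wrong) == 0:
        return 0.0
    return wrong["text"].str.contains(keyword, regex=False).mean()
```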
When the model misclassifies samples from the clean class as offensive or hate, it tends to be misled by sensitive words in those samples. The offensive and hate classes are quite difficult to distinguish, even for humans.
Conclusion
In this study, we experiment with combining multiple embedding types and multiple model architectures to solve part of the Hate Speech Detection problem, with reasonably good classification results. Our system is heavily based on the ensemble technique, so its weakness is slow processing speed. In practice, this is not a big problem for HSD, since humans are usually involved in handling the content directly beforehand.
HSD is a hard problem even for humans. In order to improve classification quality, in the future we need to collect more data, especially social network content. This will make the text representations more accurate and help the model classify more easily.
Unanswerable
284ea817fd79bc10b7a82c88d353e8f8a9d7e93c | 284ea817fd79bc10b7a82c88d353e8f8a9d7e93c_0 | Q: Is the data all in Vietnamese?
Yes
c0122190119027dc3eb51f0d4b4483d2dbedc696 | c0122190119027dc3eb51f0d4b4483d2dbedc696_0 | Q: What classifier do they use?
Stacking method, LSTMCNN, SARNN, simple LSTM bidirectional model, TextCNN
1ed6acb88954f31b78d2821bb230b722374792ed | 1ed6acb88954f31b78d2821bb230b722374792ed_0 | Q: What is private dashboard?
Private dashboard is leaderboard where competitors can see results after competition is finished - on hidden part of test set (private test set).
5a33ec23b4341584a8079db459d89a4e23420494 | 5a33ec23b4341584a8079db459d89a4e23420494_0 | Q: What is public dashboard?
Text: Introduction
Currently, social networks are so popular. Some of the biggest ones include Facebook, Twitter, Youtube,... with extremely number of users. Thus, controlling content of those platforms is essential. For years, social media companies such as Twitter, Facebook, and YouTube have been investing hundreds of millions euros on this task BIBREF0, BIBREF1. However, their effort is not enough since such efforts are primarily based on manual moderation to identify and delete offensive materials. The process is labour intensive, time consuming, and not sustainable or scalable in reality BIBREF2, BIBREF0, BIBREF3.
In the sixth international workshop on Vietnamese Language and Speech Processing (VLSP 2019), the Hate Speech Detection (HSD) task is proposed as one of the shared-tasks to handle the problem related to controlling content in SNSs. HSD is required to build a multi-class classification model that is capable of classifying an item to one of 3 classes (hate, offensive, clean). Hate speech (hate): an item is identified as hate speech if it (1) targets individuals or groups on the basis of their characteristics; (2) demonstrates a clear intention to incite harm, or to promote hatred; (3) may or may not use offensive or profane words. Offensive but not hate speech (offensive): an item (posts/comments) may contain offensive words but it does not target individuals or groups on the basis of their characteristics. Neither offensive nor hate speech (clean): normal item, it does not contain offensive language or hate speech.
The term `hate speech' was formally defined as `any communication that disparages a person or a group on the basis of some characteristics (to be referred to as types of hate or hate classes) such as race, colour, ethnicity, gender, sexual orientation, nationality, religion, or other characteristics' BIBREF4. Many researches have been conducted in recent years to develop automatic methods for hate speech detection in the social media domain. These typically employ semantic content analysis techniques built on Natural Language Processing (NLP) and Machine Learning (ML) methods. The task typically involves classifying textual content into non-hate or hateful. This HSD task is much more difficult when it requires classify text in three classes, with hate and offensive class quite hard to classify even with humans.
In this paper, we propose a method to handle this HSD problem. Our system combines multiple text representations and models architecture in order to make diverse predictions. The system is heavily based on the ensemble method. The next section will present detail of our system including data preparation (how we clean text and build text representation), architecture of the model using in the system, and how we combine them together. The third section is our experiment and result report in HSD shared-task VLSP 2019. The final section is our conclusion with advantages and disadvantages of the system following by our perspective.
System description
In this section, we present the system architecture. It includes how we pre-process text, what types of text representation we use and models used in our system. In the end, we combine model results by using an ensemble technique.
System description ::: System overview
The fundamental idea of this system is how to make a system that has the diversity of viewing an input. That because of the variety of the meaning in Vietnamese language especially with the acronym, teen code type. To make this diversity, after cleaning raw text input, we use multiple types of word tokenizers. Each one of these tokenizers, we combine with some types of representation methods, including word to vector methods such as continuous bag of words BIBREF5, pre-trained embedding as fasttext (trained on Wiki Vietnamese language) BIBREF6 and sonvx (trained on Vietnamese newspaper) BIBREF7. Each sentence has a set of words corresponding to a set of word vectors, and that set of word vectors is a representation of a sentence. We also make a sentence embedding by using RoBERTa architecture BIBREF8. CBOW and RoBERTa models trained on text from some resources including VLSP 2016 Sentiment Analysis, VLSP 2018 Sentiment Analysis, VLSP 2019 HSD and text crawled from Facebook. After having sentence representation, we use some classification models to classify input sentences. Those models will be described in detail in the section SECREF13. With the multiply output results, we will use an ensemble method to combine them and output the final result. Ensemble method we use here is Stacking method will be introduced in the section SECREF16.
System description ::: Data pre-processing
Content in the dataset that provided in this HSD task is very diverse. Words having the same meaning were written in various types (teen code, non tone, emojis,..) depending on the style of users. Dataset was crawled from various sources with multiple text encodes. In order to make it easy for training, all types of encoding need to be unified. This cleaning module will be used in two processes: cleaning data before training and cleaning input in inferring phase. Following is the data processing steps that we use:
Step 1: Format encoding. Vietnamese has many accents, intonations with different Unicode typing programs which may have different outputs with the same typing type. To make it unified, we build a library named visen. For example, the input "thíêt kê will be normalized to "thiết kế" as the output.
Step 2: In social networks, people show their feelings a lot by emojis. Emoticon is often a special Unicode character, but sometimes, it is combined by multiple normal characters like `: ( = ]'. We make a dictionary mapping this emoji (combined by some characters) to a single Unicode character like other emojis to make it unified.
Step 3: Remove unseen characters. For human, unseen character is invisible but for a computer, it makes the model harder to process and inserts space between words, punctuation and emoji. This step aims at reducing the number of words in the dictionary which is important task, especially with low dataset resources like this HSD task.
Step 4: With model requiring Vietnamese word segmentation as the input, we use BIBREF9, BIBREF10 to tokenize the input text.
Step 5: Make all string lower. We experimented and found that lower-case or upper-case are not a significant impact on the result, but with lower characters, the number of words in the dictionary is reduced.
RoBERTa proposed in BIBREF8 an optimized method for pretraining self-supervised NLP systems. In our system, we use RoBERTa not only to make sentence representation but also to augment data. With mask mechanism, we replace a word in the input sentence with another word that RoBERTa model proposes. To reduce the impact of replacement word, the chosen words are all common words that appear in almost three classes of the dataset. For example, with input `nhổn làm gắt vl', we can augment to other outputs: `vl làm gắt qá', `còn làm vl vậy', `vl làm đỉnh vl' or `thanh chút gắt vl'.
british
System description ::: Models architecture
Social comment dataset has high variety, the core idea is using multiple model architectures to handle data in many viewpoints. In our system, we use five different model architectures combining many types of CNN, and RNN. Each model will use some types of word embedding or handle directly sentence embedding to achieve the best general result. Source code of five models is extended from the GitHub repository
The first model is TextCNN (figure FIGREF2) proposed in BIBREF11. It only contains CNN blocks following by some Dense layers. The output of multiple CNN blocks with different kernel sizes is connected to each other.
The second model is VDCNN (figure FIGREF5) inspired by the research in BIBREF12. Like the TextCNN model, it contains multiple CNN blocks. The addition in this model is its residual connection.
The third model is a simple LSTM bidirectional model (figure FIGREF15). It contains multiple LSTM bidirectional blocks stacked to each other.
The fourth model is LSTMCNN (figure FIGREF24). Before going through CNN blocks, series of word embedding will be transformed by LSTM bidirectional block.
The final model is the system named SARNN (figure FIGREF25). It adds an attention block between LTSM blocks.
System description ::: Ensemble method
Ensemble methods is a machine learning technique that combines several base models in order to produce one optimal predictive model. Have the main three types of ensemble methods including Bagging, Boosting and Stacking. In this system, we use the Stacking method. In this method, the output of each model is not only class id but also the probability of each class in the set of three classes. This probability will become a feature for the ensemble model. The stacking ensemble model here is a simple full-connection model with input is all of probability that output from sub-model. The output is the probability of each class.
Experiment
The dataset in this HSD task is really imbalance. Clean class dominates with 91.5%, offensive class takes 5% and the rest belongs to hate class with 3.5%. To make model being able to learn with this imbalance data, we inject class weight to the loss function with the corresponding ratio (clean, offensive, hate) is $(0.09, 0.95, 0.96)$. Formular DISPLAY_FORM17 is the loss function apply for all models in our system. $w_i$ is the class weight, $y_i$ is the ground truth and $\hat{y}_i$ is the output of the model. If the class weight is not set, we find that model cannot adjust parameters. The model tends to output all clean classes.
We experiment with 8 types of embedding in total:
comment: CBOW embedding trained on all dataset comments, with each word split by spaces. The embedding size is 200 (a minimal training sketch follows this list).
comment_bpe: CBOW embedding trained on all dataset comments, with each word split into subwords by BPE. The embedding size is 200.
comment_tokenize: CBOW embedding trained on all dataset comments, with each word split by spaces. Before splitting by spaces, compound words are joined using BIBREF9, BIBREF13, BIBREF10. The embedding size is 200.
roberta: sentence embedding trained on all dataset comments using the RoBERTa architecture. The embedding size is 256.
fasttext, sonvx*: all pre-trained word embeddings on general-domain data. Before mapping words to vectors, compound words are joined using BIBREF9, BIBREF13, BIBREF10. The embedding size of fasttext is 300; (sonvx_wiki, sonvx_baomoi_w2, sonvx_baomoi_w5) have embedding sizes of (400, 300, 400), respectively.
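A minimal gensim sketch for the CBOW comment embeddings referenced above; parameter names follow gensim 4.x (older releases use `size` instead of `vector_size`), and the window/min_count values are assumptions:

```python
from gensim.models import Word2Vec

# sentences: list of token lists from the cleaned comments; sg=0 selects CBOW.
model = Word2Vec(sentences, vector_size=200, sg=0, window=5, min_count=2, workers=4)
model.save("comment_cbow.model")
vec = model.wv["vl"]    # 200-dimensional vector, assuming the word is in the vocabulary
```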
In our experiment, the dataset is split into two parts: a train set and a dev set with a ratio of $(0.9, 0.1)$. The two subsets have the same imbalance ratio as the full set. For each combination of model and word embedding, we train the model on the train set until it achieves the best loss score on the dev set. Table TABREF12 shows the best result of each combination on the f1_macro score.
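One way to obtain such a 0.9/0.1 split while preserving the class-imbalance ratio is a stratified split, sketched here with scikit-learn (the paper does not state which tool was used); `texts` and `labels` are the assumed full dataset:

```python
from sklearn.model_selection import train_test_split

X_train, X_dev, y_train, y_dev = train_test_split(
    texts, labels, test_size=0.1, stratify=labels, random_state=42)
```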
For each model with the best fit on the dev set, we export the probability distribution over classes for each sample in the dev set. In this case, we only use the results of models whose f1_macro score is larger than 0.67. The probability distribution over classes is then used as the feature input to a dense model with only one hidden layer (size 128). The training process of the ensemble model is done on samples of the dev set. The best fit result is 0.7356. The final result submitted on the public leaderboard is 0.73019 and on the private leaderboard 0.58455, which is quite different in a bad way. This may be because the model overfits the train set while being tuned on the public test set.
Statistics of the final result on the dev set show that most cases wrongly predicted from the offensive and hate classes into the clean class belong to samples containing the word `vl' (62% in the offensive class and 48% in the hate class). This means the model overfits the word `vl' toward the clean class, which makes sense because `vl' appears very frequently in the clean-class data.
When the model wrongly predicts the clean class as the offensive or hate class, it tends to be misled by samples containing sensitive words. The offensive and hate classes are quite difficult to distinguish, even for humans.
Conclusion
In this study, we experiment with the combination of multiple embedding types and multiple model architectures to solve part of the Hate Speech Detection problem, with significantly good classification results. Our system is heavily based on the ensemble technique, so its weakness is slow processing speed. In practice, this is not a big problem for the HSD task, since humans have usually handled this moderation directly before.
HSD is a hard problem even for humans. In order to improve classification quality, in the future we need to collect more data, especially social network content. This will make the text representation more accurate and help the model classify more easily.
| Public dashboard where competitors can see their results during competition, on part of the test set (public test set).
1b9119813ea637974d21862a8ace83bc1acbab8e | 1b9119813ea637974d21862a8ace83bc1acbab8e_0 | Q: What dataset do they use?
Text: Introduction
Currently, social networks are very popular. Some of the biggest ones include Facebook, Twitter, Youtube, ... with extremely large numbers of users. Thus, controlling the content of those platforms is essential. For years, social media companies such as Twitter, Facebook, and YouTube have been investing hundreds of millions of euros in this task BIBREF0, BIBREF1. However, their effort is not enough, since such efforts are primarily based on manual moderation to identify and delete offensive materials. The process is labour intensive, time consuming, and not sustainable or scalable in reality BIBREF2, BIBREF0, BIBREF3.
In the sixth international workshop on Vietnamese Language and Speech Processing (VLSP 2019), the Hate Speech Detection (HSD) task was proposed as one of the shared tasks to handle the problem of controlling content in SNSs. HSD requires building a multi-class classification model that is capable of classifying an item into one of 3 classes (hate, offensive, clean). Hate speech (hate): an item is identified as hate speech if it (1) targets individuals or groups on the basis of their characteristics; (2) demonstrates a clear intention to incite harm, or to promote hatred; (3) may or may not use offensive or profane words. Offensive but not hate speech (offensive): an item (posts/comments) may contain offensive words but it does not target individuals or groups on the basis of their characteristics. Neither offensive nor hate speech (clean): a normal item, which does not contain offensive language or hate speech.
The term `hate speech' was formally defined as `any communication that disparages a person or a group on the basis of some characteristics (to be referred to as types of hate or hate classes) such as race, colour, ethnicity, gender, sexual orientation, nationality, religion, or other characteristics' BIBREF4. Much research has been conducted in recent years to develop automatic methods for hate speech detection in the social media domain. These typically employ semantic content analysis techniques built on Natural Language Processing (NLP) and Machine Learning (ML) methods. The task typically involves classifying textual content into non-hate or hateful. This HSD task is much more difficult because it requires classifying text into three classes, with the hate and offensive classes quite hard to distinguish even for humans.
In this paper, we propose a method to handle this HSD problem. Our system combines multiple text representations and model architectures in order to make diverse predictions. The system is heavily based on the ensemble method. The next section presents the details of our system, including data preparation (how we clean text and build text representations), the architecture of the models used in the system, and how we combine them together. The third section is our experiment and result report in the HSD shared task of VLSP 2019. The final section is our conclusion, with the advantages and disadvantages of the system, followed by our perspective.
System description
In this section, we present the system architecture. It includes how we pre-process the text, what types of text representation we use and the models used in our system. In the end, we combine the model results by using an ensemble technique.
System description ::: System overview
The fundamental idea of this system is to build a system that views an input with diversity, because of the variety of meaning in the Vietnamese language, especially with acronyms and teen code. To create this diversity, after cleaning the raw text input, we use multiple types of word tokenizers. We combine each of these tokenizers with several types of representation methods, including word-to-vector methods such as continuous bag of words BIBREF5, and pre-trained embeddings such as fasttext (trained on Vietnamese Wikipedia) BIBREF6 and sonvx (trained on Vietnamese newspapers) BIBREF7. Each sentence has a set of words corresponding to a set of word vectors, and that set of word vectors is a representation of the sentence. We also build a sentence embedding by using the RoBERTa architecture BIBREF8. The CBOW and RoBERTa models are trained on text from several resources, including VLSP 2016 Sentiment Analysis, VLSP 2018 Sentiment Analysis, VLSP 2019 HSD and text crawled from Facebook. After obtaining the sentence representations, we use several classification models to classify the input sentences. Those models are described in detail in Section SECREF13. With the multiple output results, we use an ensemble method to combine them and output the final result. The ensemble method used here is the stacking method, introduced in Section SECREF16.
System description ::: Data pre-processing
The content in the dataset provided for this HSD task is very diverse. Words having the same meaning are written in various forms (teen code, without tones, emojis, ...) depending on the style of the users. The dataset was crawled from various sources with multiple text encodings. In order to make it easy for training, all types of encoding need to be unified. This cleaning module is used in two processes: cleaning data before training and cleaning input in the inference phase. The following are the data processing steps that we use:
Step 1: Format encoding. Vietnamese has many accents and intonations, and different Unicode typing programs may produce different outputs for the same typed text. To unify them, we build a library named visen. For example, the input "thíêt kê" will be normalized to "thiết kế" as the output.
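The visen library itself is not shown in the paper; as an illustration of the underlying idea, Unicode NFC normalization composes base characters and combining diacritics into one canonical form (full tone-mark repositioning, as in the example above, would need extra rules on top of this):

```python
import unicodedata

def normalize_unicode(text):
    # Compose base characters + combining diacritics into single code points,
    # so different Vietnamese typing programs yield one canonical representation.
    return unicodedata.normalize("NFC", text)
```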
Step 2: In social networks, people often show their feelings with emojis. An emoticon is often a special Unicode character, but sometimes it is composed of multiple normal characters like `: ( = ]'. We build a dictionary mapping such emoticons (composed of several characters) to a single Unicode character, like other emojis, to make them unified.
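A small sketch of such an emoticon-unification dictionary; the mappings below are illustrative, not the authors' actual table:

```python
import re

# Hypothetical mapping from multi-character emoticons to single emoji code points.
EMOTICON_MAP = {":(": "\U0001F61E", ":)": "\U0001F642", "=]": "\U0001F642"}
EMOTICON_RE = re.compile("|".join(re.escape(e) for e in EMOTICON_MAP))

def unify_emoticons(text):
    return EMOTICON_RE.sub(lambda m: EMOTICON_MAP[m.group(0)], text)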
Step 3: Remove unseen characters. To a human reader these characters are invisible, but for a computer they make the text harder to process and insert spurious spaces between words, punctuation and emoji. This step aims at reducing the number of words in the dictionary, which is an important task, especially with a low-resource dataset like the one in this HSD task.
Step 4: For models requiring Vietnamese word segmentation as input, we use BIBREF9, BIBREF10 to tokenize the input text.
Step 5: Lowercase all strings. We experimented and found that casing has no significant impact on the result, but with lowercase characters the number of words in the dictionary is reduced.
RoBERTa BIBREF8 is an optimized method for pretraining self-supervised NLP systems. In our system, we use RoBERTa not only to build sentence representations but also to augment data. Using the masking mechanism, we replace a word in the input sentence with another word that the RoBERTa model proposes. To reduce the impact of the replacement, the chosen words are all common words that appear across the three classes of the dataset. For example, from the input `nhổn làm gắt vl' we can generate other outputs such as `vl làm gắt qá', `còn làm vl vậy', `vl làm đỉnh vl' or `thanh chút gắt vl'.
System description ::: Models architecture
The social comment dataset is highly varied, so the core idea is to use multiple model architectures that handle the data from many viewpoints. In our system, we use five different model architectures combining several types of CNNs and RNNs. Each model uses certain types of word embeddings, or handles sentence embeddings directly, to achieve the best general result. The source code of the five models is extended from a GitHub repository
The first model is TextCNN (figure FIGREF2), proposed in BIBREF11. It contains only CNN blocks followed by some Dense layers. The outputs of multiple CNN blocks with different kernel sizes are concatenated with each other.
The second model is VDCNN (figure FIGREF5), inspired by the research in BIBREF12. Like the TextCNN model, it contains multiple CNN blocks. The addition in this model is its residual connections.
The third model is a simple bidirectional LSTM model (figure FIGREF15). It contains multiple bidirectional LSTM blocks stacked on top of each other.
The fourth model is LSTMCNN (figure FIGREF24). Before going through the CNN blocks, the sequence of word embeddings is transformed by a bidirectional LSTM block.
The final model is the system named SARNN (figure FIGREF25). It adds an attention block between LSTM blocks.
System description ::: Ensemble method
Ensemble methods are machine learning techniques that combine several base models in order to produce one optimal predictive model. There are three main types of ensemble methods: bagging, boosting and stacking. In this system, we use the stacking method. In this method, the output of each model is not only the class id but also the probability of each class in the set of three classes. These probabilities become features for the ensemble model. The stacking ensemble model here is a simple fully-connected model whose input is all of the probabilities output by the sub-models. The output is the probability of each class.
Experiment
The dataset in this HSD task is highly imbalanced. The clean class dominates with 91.5%, the offensive class takes 5% and the rest belongs to the hate class with 3.5%. To make the model able to learn from this imbalanced data, we inject class weights into the loss function with the corresponding ratio (clean, offensive, hate) of $(0.09, 0.95, 0.96)$. Formula DISPLAY_FORM17 is the loss function applied to all models in our system. $w_i$ is the class weight, $y_i$ is the ground truth and $\hat{y}_i$ is the output of the model. If the class weights are not set, we find that the model cannot adjust its parameters and tends to predict the clean class for every sample.
We experiment with 8 types of embedding in total:
comment: CBOW embedding trained on all dataset comments, with each word split by spaces. The embedding size is 200.
comment_bpe: CBOW embedding trained on all dataset comments, with each word split into subwords by BPE. The embedding size is 200.
comment_tokenize: CBOW embedding trained on all dataset comments, with each word split by spaces. Before splitting by spaces, compound words are joined using BIBREF9, BIBREF13, BIBREF10. The embedding size is 200.
roberta: sentence embedding trained on all dataset comments using the RoBERTa architecture. The embedding size is 256.
fasttext, sonvx*: all pre-trained word embeddings on general-domain data. Before mapping words to vectors, compound words are joined using BIBREF9, BIBREF13, BIBREF10. The embedding size of fasttext is 300; (sonvx_wiki, sonvx_baomoi_w2, sonvx_baomoi_w5) have embedding sizes of (400, 300, 400), respectively.
In our experiment, the dataset is split into two parts: a train set and a dev set with a ratio of $(0.9, 0.1)$. The two subsets have the same imbalance ratio as the full set. For each combination of model and word embedding, we train the model on the train set until it achieves the best loss score on the dev set. Table TABREF12 shows the best result of each combination on the f1_macro score.
For each model with the best fit on the dev set, we export the probability distribution over classes for each sample in the dev set. In this case, we only use the results of models whose f1_macro score is larger than 0.67. The probability distribution over classes is then used as the feature input to a dense model with only one hidden layer (size 128). The training process of the ensemble model is done on samples of the dev set. The best fit result is 0.7356. The final result submitted on the public leaderboard is 0.73019 and on the private leaderboard 0.58455, which is quite different in a bad way. This may be because the model overfits the train set while being tuned on the public test set.
Statistics of the final result on the dev set show that most cases wrongly predicted from the offensive and hate classes into the clean class belong to samples containing the word `vl' (62% in the offensive class and 48% in the hate class). This means the model overfits the word `vl' toward the clean class, which makes sense because `vl' appears very frequently in the clean-class data.
When the model wrongly predicts the clean class as the offensive or hate class, it tends to be misled by samples containing sensitive words. The offensive and hate classes are quite difficult to distinguish, even for humans.
Conclusion
In this study, we experiment with the combination of multiple embedding types and multiple model architectures to solve part of the Hate Speech Detection problem, with significantly good classification results. Our system is heavily based on the ensemble technique, so its weakness is slow processing speed. In practice, this is not a big problem for the HSD task, since humans have usually handled this moderation directly before.
HSD is a hard problem even for humans. In order to improve classification quality, in the future we need to collect more data, especially social network content. This will make the text representation more accurate and help the model classify more easily.
| They used Wiki Vietnamese language and Vietnamese newspapers to pretrain embeddings and dataset provided in HSD task to train model (details not mentioned in paper).
8abb96b2450ebccfcc5c98772cec3d86cd0f53e0 | 8abb96b2450ebccfcc5c98772cec3d86cd0f53e0_0 | Q: Do the authors report results only on English data?
Text: Introduction
The main motivation of this work started with a question: "What do people do to maintain their health?" – some people follow a balanced diet, some do exercise. Among diet plans, some people maintain a vegetarian/vegan diet; among exercises, some people do swimming, cycling or yoga. There are people who do both. If we want to know the answers to the following questions – "How many people follow a diet?", "How many people do yoga?", "Does a yogi follow a vegetarian/vegan diet?" – maybe we could ask an acquaintance, but this would provide very little intuition about the data. Nowadays people usually share their interests and thoughts via discussions, tweets, and statuses in social media (i.e. Facebook, Twitter, Instagram etc.). This is a huge amount of data, and it is not possible to go through all of it manually. We need to mine the data to get overall statistics, and then we will also be able to find some interesting correlations in the data.
Several works have been done on prediction of social media content BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . Prieto et al. proposed a method to extract a set of tweets to estimate and track the incidence of health conditions in society BIBREF5 . Discovering public health topics and themes in tweets had been examined by Prier et al. BIBREF6 . Yoon et al. described a practical approach of content mining to analyze tweet contents and illustrate an application of the approach to the topic of physical activity BIBREF7 .
Twitter data constitutes a rich source that can be used for capturing information about any topic imaginable. In this work, we use text mining to mine the Twitter health-related data. Text mining is the application of natural language processing techniques to derive relevant information BIBREF8 . Millions of tweets are generated each day on multifarious issues BIBREF9 . Twitter mining in large scale has been getting a lot of attention last few years. Lin and Ryaboy discussed the evolution of Twitter infrastructure and the development of capabilities for data mining on "big data" BIBREF10 . Pandarachalil et al. provided a scalable and distributed solution using Parallel python framework for Twitter sentiment analysis BIBREF9 . Large-scale Twitter Mining for drug-related adverse events was developed by Bian et al. BIBREF11 .
In this paper, we use parallel and distributed technology Apache Kafka BIBREF12 to handle the large streaming twitter data. The data processing is conducted in parallel with data extraction by integration of Apache Kafka and Spark Streaming. Then we use Topic Modeling to infer semantic structure of the unstructured data (i.e Tweets). Topic Modeling is a text mining technique which automatically discovers the hidden themes from given documents. It is an unsupervised text analytic algorithm that is used for finding the group of words from the given document. We build the model using three different algorithms Latent Semantic Analysis (LSA) BIBREF13 , Non-negative Matrix Factorization (NMF) BIBREF14 , and Latent Dirichlet Allocation (LDA) BIBREF15 and infer the topic of tweets. To observe the model behavior, we test the model to infer new tweets. The implication of our work is to annotate unlabeled data using the model and find interesting correlation.
Data Collection
Tweet messages are retrieved from the Twitter source by utilizing the Twitter API and stored in Kafka topics. The Producer API is used to connect the source (i.e. Twitter) to any Kafka topic as a stream of records for a specific category. We fetch data from a source (Twitter), push it to a message queue, and consume it for further analysis. Fig. FIGREF2 shows the overview of Twitter data collection using Kafka.
Apache Kafka
In order to handle the large stream of Twitter data, we use a parallel and distributed big data framework. In this case, the output of the Twitter crawling is queued in a messaging system called Apache Kafka. This is a distributed streaming platform created and open sourced by LinkedIn in 2011 BIBREF12 . We write a Producer Client which fetches the latest tweets continuously using the Twitter API and pushes them to a single-node Kafka Broker. There is a Consumer that reads data from Kafka (Fig. FIGREF2 ).
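A minimal kafka-python sketch of this producer/consumer setup; the broker address, topic name and the process() handler are placeholders:

```python
import json
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"        # single-node broker address (assumed)

producer = KafkaProducer(bootstrap_servers=BROKER,
                         value_serializer=lambda d: json.dumps(d).encode("utf-8"))
producer.send("health_tweets", {"text": "morning yoga #wellness"})
producer.flush()

consumer = KafkaConsumer("health_tweets", bootstrap_servers=BROKER,
                         value_deserializer=lambda b: json.loads(b.decode("utf-8")),
                         auto_offset_reset="earliest")
for record in consumer:
    process(record.value)        # placeholder: hand each tweet to the analysis pipeline
```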
Apache Zookeeper
Apache Zookeeper is a distributed, open-source configuration, synchronization service along with naming registry for distributed applications. Kafka uses Zookeeper to store metadata about the Kafka cluster, as well as consumer client details.
Data Extraction using Tweepy
The twitter data has been crawled using Tweepy which is a Python library for accessing the Twitter API. We use Twitter streaming API to extract 40k tweets (April 17-19, 2019). For the crawling, we focus on several keywords that are related to health. The keywords are processed in a non-case-sensitive way. We use filter to stream all tweets containing the word `yoga', `healthylife', `healthydiet', `diet',`hiking', `swimming', `cycling', `yogi', `fatburn', `weightloss', `pilates', `zumba', `nutritiousfood', `wellness', `fitness', `workout', `vegetarian', `vegan', `lowcarb', `glutenfree', `calorieburn'.
The streaming API returns tweets, as well as several other types of messages (e.g. a tweet deletion notice, user update profile notice, etc), all in JSON format. We use Python libraries json for parsing the data, pandas for data manipulation.
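A sketch of the keyword-filtered streaming setup with Tweepy 3.x (the API current in 2019; Tweepy 4.x renamed StreamListener); the credentials, keyword subset and the Kafka producer from the earlier sketch are placeholders/assumptions:

```python
import json
import tweepy

KEYWORDS = ["yoga", "diet", "vegan", "fitness", "workout"]   # subset of the filter terms above

class HealthListener(tweepy.StreamListener):
    def on_data(self, raw_json):
        # push the parsed tweet to the Kafka topic (producer from the earlier sketch)
        producer.send("health_tweets", json.loads(raw_json))
        return True

    def on_error(self, status_code):
        return status_code != 420        # disconnect only on rate-limit errors

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)    # credentials are placeholders
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
stream = tweepy.Stream(auth=auth, listener=HealthListener())
stream.filter(track=KEYWORDS)
```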
Data Pre-processing
Data pre-processing is one of the key components in many text mining algorithms BIBREF8 . Data cleaning is crucial for generating a useful topic model. We have some prerequisites i.e. we download the stopwords from NLTK (Natural Language Toolkit) and spacy's en model for text pre-processing.
It is noticeable that the parsed full-text tweets have many emails, `RT', newline and extra spaces that is quite distracting. We use Python Regular Expressions (re module) to get rid of them. Then we tokenize each text into a list of words, remove punctuation and unnecessary characters. We use Python Gensim package for further processing. Gensim's simple_preprocess() is used for tokenization and removing punctuation. We use Gensim's Phrases model to build bigrams. Certain parts of English speech, like conjunctions ("for", "or") or the word "the" are meaningless to a topic model. These terms are called stopwords and we remove them from the token list. We use spacy model for lemmatization to keep only noun, adjective, verb, adverb. Stemming words is another common NLP technique to reduce topically similar words to their root. For example, "connect", "connecting", "connected", "connection", "connections" all have similar meanings; stemming reduces those terms to "connect". The Porter stemming algorithm BIBREF16 is the most widely used method.
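A condensed sketch of this cleaning pipeline with gensim, NLTK and spaCy; `en_core_web_sm` stands in for the `en` model named in the text, and the Phrases thresholds are assumptions:

```python
import spacy
from gensim.utils import simple_preprocess
from gensim.models import Phrases
from gensim.models.phrases import Phraser
from nltk.corpus import stopwords        # requires nltk.download("stopwords") once

STOP = set(stopwords.words("english"))
nlp = spacy.load("en_core_web_sm", disable=["parser", "ner"])

def clean(tweets):
    docs = [[w for w in simple_preprocess(t, deacc=True) if w not in STOP] for t in tweets]
    bigram = Phraser(Phrases(docs, min_count=5, threshold=100))   # build bigrams
    docs = [bigram[d] for d in docs]
    keep = {"NOUN", "ADJ", "VERB", "ADV"}                         # lemmatize, keep these POS
    return [[tok.lemma_ for tok in nlp(" ".join(d)) if tok.pos_ in keep] for d in docs]
```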
Methodology
We use Twitter health-related data for this analysis. Subsections 3.1, 3.2, 3.3, and 3.4 present in detail how we infer the meaning of unstructured data. Subsection 3.5 shows how we do manual annotation for ground truth comparison. Fig. FIGREF6 shows the overall pipeline of correlation mining.
Construct document-term matrix
The result of the data cleaning stage is texts, a tokenized, stopped, stemmed and lemmatized list of words from a single tweet. To understand how frequently each term occurs within each tweet, we construct a document-term matrix using Gensim's Dictionary() function. Gensim's doc2bow() function converts dictionary into a bag-of-words. In the bag-of-words model, each tweet is represented by a vector in a m-dimensional coordinate space, where m is number of unique terms across all tweets. This set of terms is called the corpus vocabulary.
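The corresponding gensim calls, assuming `texts` is the list of cleaned token lists from the previous step:

```python
from gensim.corpora import Dictionary

dictionary = Dictionary(texts)                        # corpus vocabulary
corpus = [dictionary.doc2bow(doc) for doc in texts]   # bag-of-words vector per tweet
```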
Topic Modeling
Topic modeling is a text mining technique which provides methods for identifying co-occurring keywords to summarize collections of textual information. This is used to analyze collections of documents, each of which is represented as a mixture of topics, where each topic is a probability distribution over words BIBREF17 . Applying these models to a document collection involves estimating the topic distributions and the weight each topic receives in each document. A number of algorithms exist for solving this problem. We use three unsupervised machine learning algorithms to explore the topics of the tweets: Latent Semantic Analysis (LSA) BIBREF13 , Non-negative Matrix Factorization (NMF) BIBREF14 , and Latent Dirichlet Allocation (LDA) BIBREF15 . Fig. FIGREF7 shows the general idea of topic modeling methodology. Each tweet is considered as a document. LSA, NMF, and LDA use Bag of Words (BoW) model, which results in a term-document matrix (occurrence of terms in a document). Rows represent terms (words) and columns represent documents (tweets). After completing topic modeling, we identify the groups of co-occurring words in tweets. These group co-occurring related words makes "topics".
LSA (Latent Semantic Analysis) BIBREF13 is also known as LSI (Latent Semantic Index). It learns latent topics by performing a matrix decomposition on the document-term matrix using Singular Value Decomposition (SVD) BIBREF18 . After corpus creation in [subsec:3.1]Subsection 3.1, we generate an LSA model using Gensim.
Non-negative Matrix Factorization (NMF) BIBREF14 is a widely used tool for the analysis of high-dimensional data as it automatically extracts sparse and meaningful features from a set of non-negative data vectors. It is a matrix factorization method where we constrain the matrices to be non-negative.
We apply Term Weighting with term frequency-inverse document frequency (TF-IDF) BIBREF19 to improve the usefulness of the document-term matrix (created in Subsection 3.1) by giving more weight to the more "important" terms. In Scikit-learn, we can generate a TF-IDF weighted document-term matrix by using TfidfVectorizer. We import the NMF model class from sklearn.decomposition and fit the topic model to tweets.
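A sketch of the TF-IDF + NMF step in scikit-learn; the vectorizer thresholds and NMF initialization are assumptions, and `get_feature_names_out` is the newer name of `get_feature_names`:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

tfidf = TfidfVectorizer(max_df=0.95, min_df=2)
A = tfidf.fit_transform(" ".join(doc) for doc in texts)
nmf = NMF(n_components=4, random_state=42, init="nndsvd")
W = nmf.fit_transform(A)                 # document-topic weights
H = nmf.components_                      # topic-term weights
terms = tfidf.get_feature_names_out()
top_words = [[terms[i] for i in topic.argsort()[-10:][::-1]] for topic in H]
```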
Latent Dirichlet Allocation (LDA) BIBREF15 is widely used for identifying the topics in a set of documents, building on Probabilistic Latent Semantic Analysis (PLSI) BIBREF20 . LDA considers each document as a collection of topics in a certain proportion and each topic as a collection of keywords in a certain proportion. We provide LDA the optimal number of topics, it rearranges the topics' distribution within the documents and keywords' distribution within the topics to obtain a good composition of topic-keywords distribution.
We have corpus generated in [subsec:3.1]Subsection 3.1 to train the LDA model. In addition to the corpus and dictionary, we provide the number of topics as well.
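A minimal gensim LDA sketch with the chosen number of topics; the number of passes and the alpha setting are assumptions:

```python
from gensim.models import LdaModel

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=4,
               passes=10, random_state=42, alpha="auto")
for topic_id, words in lda.show_topics(num_topics=4, num_words=10, formatted=False):
    print(topic_id, [w for w, _ in words])
```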
Optimal number of Topics
Topic modeling is an unsupervised learning, so the set of possible topics are unknown. To find out the optimal number of topic, we build many LSA, NMF, LDA models with different values of number of topics (k) and pick the one that gives the highest coherence score. Choosing a `k' that marks the end of a rapid growth of topic coherence usually offers meaningful and interpretable topics.
We use Gensim's coherencemodel to calculate topic coherence for topic models (LSA and LDA). For NMF, we use a topic coherence measure called TC-W2V. This measure relies on the use of a word embedding model constructed from the corpus. So in this step, we use the Gensim implementation of Word2Vec BIBREF21 to build a Word2Vec model based on the collection of tweets.
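A sketch of the coherence sweep over candidate k values for LDA; the k range and the `c_v` coherence measure are assumptions (the text uses TC-W2V for NMF):

```python
from gensim.models import CoherenceModel, LdaModel

def best_k(corpus, dictionary, texts, k_range=range(2, 11)):
    scores = {}
    for k in k_range:
        lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k, random_state=42)
        cm = CoherenceModel(model=lda, texts=texts, dictionary=dictionary, coherence="c_v")
        scores[k] = cm.get_coherence()
    return max(scores, key=scores.get), scores
```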
We achieve the highest coherence score of 0.4495 when the number of topics is 2 for LSA; for NMF the highest coherence value is 0.6433 for k = 4; and for LDA we also get 4 topics, with the highest coherence score being 0.3871 (see Fig. FIGREF8 ).
For our dataset, we picked k = 2, 4, and 4 with the highest coherence value for LSA, NMF, and LDA correspondingly (Fig. FIGREF8 ). Table TABREF13 shows the topics and top-10 keywords of the corresponding topic. We get more informative and understandable topics using LDA model than LSA. LSA decomposed matrix is a highly dense matrix, so it is difficult to index individual dimension. LSA is unable to capture the multiple meanings of words. It offers lower accuracy than LDA.
In the case of NMF, we observe that the same keywords are repeated in multiple topics. The keywords "go" and "day" are both repeated in Topic 2, Topic 3, and Topic 4 (Table TABREF13 ). In Table TABREF13 the keyword "yoga" is found in both Topic 1 and Topic 4. We also notice that the keyword "eat" is in Topic 2 and Topic 3 (Table TABREF13 ). If the same keywords are repeated in multiple topics, it is probably a sign that `k' is too large, even though we achieve the highest coherence score in NMF for k=4.
We use the LDA model for our further analysis, because LDA is good at identifying coherent topics whereas NMF usually gives less coherent topics. In the average case NMF and LDA are similar, but LDA is more consistent.
Topic Inference
After doing topic modeling using the three different methods LSA, NMF, and LDA, we use LDA for further analysis, i.e. to observe the dominant topic, the 2nd dominant topic and the percentage contribution of the topics in each tweet of the training data. To observe the model behavior on new tweets that are not included in the training set, we follow the same procedure to observe the dominant topic, the 2nd dominant topic and the percentage contribution of the topics in each tweet of the testing data. Table TABREF30 shows some tweets and the corresponding dominant topic, 2nd dominant topic and percentage contribution of the topics in each tweet.
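A small sketch of how the dominant and 2nd dominant topics per tweet can be read off a trained gensim LDA model:

```python
def dominant_topics(lda, bow):
    """Return (dominant, second) (topic_id, contribution) pairs for one tweet."""
    dist = sorted(lda.get_document_topics(bow, minimum_probability=0.0),
                  key=lambda x: x[1], reverse=True)
    return dist[0], dist[1] if len(dist) > 1 else None

rows = [dominant_topics(lda, dictionary.doc2bow(doc)) for doc in texts]
```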
Manual Annotation
To calculate the accuracy of model in comparison with ground truth label, we selected top 500 tweets from train dataset (40k tweets). We extracted 500 new tweets (22 April, 2019) as a test dataset. We did manual annotation both for train and test data by choosing one topic among the 4 topics generated from LDA model (7th, 8th, 9th, and 10th columns of Table TABREF13 ) for each tweet based on the intent of the tweet. Consider the following two tweets:
Tweet 1: Learning some traditional yoga with my good friend.
Tweet 2: Why You Should #LiftWeights to Lose #BellyFat #Fitness #core #abs #diet #gym #bodybuilding #workout #yoga
The intention of Tweet 1 is yoga activity (i.e. learning yoga). Tweet 2 is more about weight lifting to reduce belly fat. This tweet is related to workout. When we do manual annotation, we assign Topic 2 in Tweet 1, and Topic 1 in Tweet 2. It's not wise to assign Topic 2 for both tweets based on the keyword "yoga". During annotation, we focus on functionality of tweets.
Visualization
We use LDAvis BIBREF22 , a web-based interactive visualization of topics estimated using LDA. Gensim's pyLDAVis is the most commonly used visualization tool to visualize the information contained in a topic model. In Fig. FIGREF21 , each bubble on the left-hand side plot represents a topic. The larger the bubble, the more prevalent that topic is. A good topic model has fairly big, non-overlapping bubbles scattered throughout the chart instead of being clustered in one quadrant. A model with too many topics typically has many overlapping, small-sized bubbles clustered in one region of the chart. On the right-hand side, the words represent the salient keywords.
If we move the cursor over one of the bubbles (Fig. FIGREF21 ), the words and bars on the right-hand side have been updated and top-30 salient keywords that form the selected topic and their estimated term frequencies are shown.
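A minimal pyLDAvis sketch; recent releases expose the gensim bridge as `pyLDAvis.gensim_models`, while older ones (as likely used here) call it `pyLDAvis.gensim`:

```python
import pyLDAvis
import pyLDAvis.gensim_models as gensimvis

vis = gensimvis.prepare(lda, corpus, dictionary)
pyLDAvis.save_html(vis, "lda_topics.html")
```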
We observe interesting hidden correlations in the data. Fig. FIGREF24 has Topic 2 as the selected topic. Topic 2 contains the top-4 co-occurring keywords "vegan", "yoga", "job", "every_woman" having the highest term frequency. We can infer different things from the topic, such as "women usually practice yoga more than men", "women teach yoga and take it as a job", and "yogis follow a vegan diet". We would say there are noticeable correlations in the data, i.e. `Yoga-Veganism' and `Women-Yoga'.
Topic Frequency Distribution
Each tweet is composed of multiple topics. But, typically only one of the topics is dominant. We extract the dominant and 2nd dominant topic for each tweet and show the weight of the topic (percentage of contribution in each tweet) and the corresponding keywords.
We plot the frequency of each topic's distribution on tweets in histogram. Fig. FIGREF25 shows the dominant topics' frequency and Fig. FIGREF25 shows the 2nd dominant topics' frequency on tweets. From Fig. FIGREF25 we observe that Topic 1 became either the dominant topic or the 2nd dominant topic for most of the tweets. 7th column of Table TABREF13 shows the corresponding top-10 keywords of Topic 1.
Comparison with Ground Truth
To compare with ground truth, we gradually increased the size of dataset 100, 200, 300, 400, and 500 tweets from train data and test data (new tweets) and did manual annotation both for train/test data based on functionality of tweets (described in [subsec:3.5]Subsection 3.5).
For accuracy calculation, we consider the dominant topic only. We achieved 66% train accuracy and 51% test accuracy when the size of dataset is 500 (Fig. FIGREF28 ). We did baseline implementation with random inference by running multiple times with different seeds and took the average accuracy. For dataset 500, the accuracy converged towards 25% which is reasonable as we have 4 topics.
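A sketch of the random-inference baseline described above, assuming integer topic labels; averaged over seeds it converges to roughly 1/4 for four topics:

```python
import numpy as np

def random_baseline(labels, n_topics=4, n_runs=100, seed=0):
    labels = np.asarray(labels)
    rng = np.random.default_rng(seed)
    accs = [(rng.integers(n_topics, size=len(labels)) == labels).mean()
            for _ in range(n_runs)]
    return float(np.mean(accs))
```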
Observation and Future Work
In Table TABREF30 , we show some observations. For the tweets in 1st and 2nd row (Table TABREF30 ), we observed understandable topic. We also noticed misleading topic and unrelated topic for few tweets (3rd and 4th row of Table TABREF30 ).
In the 1st row of Table TABREF30 , we show a tweet from train data and we got Topic 2 as a dominant topic which has 61% of contribution in this tweet. Topic 1 is 2nd dominant topic and 18% contribution here.
2nd row of Table TABREF30 shows a tweet from test set. We found Topic 2 as a dominant topic with 33% of contribution and Topic 4 as 2nd dominant topic with 32% contribution in this tweet.
In the 3rd (Table TABREF30 ), we have a tweet from test data and we got Topic 2 as a dominant topic which has 43% of contribution in this tweet. Topic 3 is 2nd dominant with 23% contribution which is misleading topic. The model misinterprets the words `water in hand' and infers topic which has keywords "swimming, swim, pool". But the model should infer more reasonable topic (Topic 1 which has keywords "diet, workout") here.
We got Topic 2 as the dominant topic for the tweet in the 4th row (Table TABREF30 ), which is an unrelated topic for this tweet, while the most relevant topic of this tweet (Topic 2) appears as the 2nd dominant topic. We think that during accuracy comparison with the ground truth the 2nd dominant topic might also be considered.
In future, we will extract more tweets and train the model and observe the model behavior on test data. As we found misleading and unrelated topic in test cases, it is important to understand the reasons behind the predictions. We will incorporate Local Interpretable model-agnostic Explanation (LIME) BIBREF23 method for the explanation of model predictions. We will also do predictive causality analysis on tweets.
Conclusions
It is challenging to analyze social media data for different application purpose. In this work, we explored Twitter health-related data, inferred topic using topic modeling (i.e. LSA, NMF, LDA), observed model behavior on new tweets, compared train/test accuracy with ground truth, employed different visualizations after information integration and discovered interesting correlation (Yoga-Veganism) in data. In future, we will incorporate Local Interpretable model-agnostic Explanation (LIME) method to understand model interpretability. | Yes |
f52ec4d68de91dba66668f0affc198706949ff90 | f52ec4d68de91dba66668f0affc198706949ff90_0 | Q: What other interesting correlations are observed?
Text: Introduction
The main motivation of this work started with a question: "What do people do to maintain their health?" – some people follow a balanced diet, some do exercise. Among diet plans, some people maintain a vegetarian/vegan diet; among exercises, some people do swimming, cycling or yoga. There are people who do both. If we want to know the answers to the following questions – "How many people follow a diet?", "How many people do yoga?", "Does a yogi follow a vegetarian/vegan diet?" – maybe we could ask an acquaintance, but this would provide very little intuition about the data. Nowadays people usually share their interests and thoughts via discussions, tweets, and statuses in social media (i.e. Facebook, Twitter, Instagram etc.). This is a huge amount of data, and it is not possible to go through all of it manually. We need to mine the data to get overall statistics, and then we will also be able to find some interesting correlations in the data.
Several works have been done on prediction of social media content BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . Prieto et al. proposed a method to extract a set of tweets to estimate and track the incidence of health conditions in society BIBREF5 . Discovering public health topics and themes in tweets had been examined by Prier et al. BIBREF6 . Yoon et al. described a practical approach of content mining to analyze tweet contents and illustrate an application of the approach to the topic of physical activity BIBREF7 .
Twitter data constitutes a rich source that can be used for capturing information about any topic imaginable. In this work, we use text mining to mine the Twitter health-related data. Text mining is the application of natural language processing techniques to derive relevant information BIBREF8 . Millions of tweets are generated each day on multifarious issues BIBREF9 . Twitter mining in large scale has been getting a lot of attention last few years. Lin and Ryaboy discussed the evolution of Twitter infrastructure and the development of capabilities for data mining on "big data" BIBREF10 . Pandarachalil et al. provided a scalable and distributed solution using Parallel python framework for Twitter sentiment analysis BIBREF9 . Large-scale Twitter Mining for drug-related adverse events was developed by Bian et al. BIBREF11 .
In this paper, we use parallel and distributed technology Apache Kafka BIBREF12 to handle the large streaming twitter data. The data processing is conducted in parallel with data extraction by integration of Apache Kafka and Spark Streaming. Then we use Topic Modeling to infer semantic structure of the unstructured data (i.e Tweets). Topic Modeling is a text mining technique which automatically discovers the hidden themes from given documents. It is an unsupervised text analytic algorithm that is used for finding the group of words from the given document. We build the model using three different algorithms Latent Semantic Analysis (LSA) BIBREF13 , Non-negative Matrix Factorization (NMF) BIBREF14 , and Latent Dirichlet Allocation (LDA) BIBREF15 and infer the topic of tweets. To observe the model behavior, we test the model to infer new tweets. The implication of our work is to annotate unlabeled data using the model and find interesting correlation.
Data Collection
Tweet messages are retrieved from the Twitter source by utilizing the Twitter API and stored in Kafka topics. The Producer API is used to connect the source (i.e. Twitter) to any Kafka topic as a stream of records for a specific category. We fetch data from a source (Twitter), push it to a message queue, and consume it for further analysis. Fig. FIGREF2 shows the overview of Twitter data collection using Kafka.
Apache Kafka
In order to handle the large stream of Twitter data, we use a parallel and distributed big data framework. In this case, the output of the Twitter crawling is queued in a messaging system called Apache Kafka. This is a distributed streaming platform created and open sourced by LinkedIn in 2011 BIBREF12 . We write a Producer Client which fetches the latest tweets continuously using the Twitter API and pushes them to a single-node Kafka Broker. There is a Consumer that reads data from Kafka (Fig. FIGREF2 ).
Apache Zookeeper
Apache Zookeeper is a distributed, open-source configuration, synchronization service along with naming registry for distributed applications. Kafka uses Zookeeper to store metadata about the Kafka cluster, as well as consumer client details.
Data Extraction using Tweepy
The twitter data has been crawled using Tweepy which is a Python library for accessing the Twitter API. We use Twitter streaming API to extract 40k tweets (April 17-19, 2019). For the crawling, we focus on several keywords that are related to health. The keywords are processed in a non-case-sensitive way. We use filter to stream all tweets containing the word `yoga', `healthylife', `healthydiet', `diet',`hiking', `swimming', `cycling', `yogi', `fatburn', `weightloss', `pilates', `zumba', `nutritiousfood', `wellness', `fitness', `workout', `vegetarian', `vegan', `lowcarb', `glutenfree', `calorieburn'.
The streaming API returns tweets, as well as several other types of messages (e.g. a tweet deletion notice, user update profile notice, etc), all in JSON format. We use Python libraries json for parsing the data, pandas for data manipulation.
Data Pre-processing
Data pre-processing is one of the key components in many text mining algorithms BIBREF8 . Data cleaning is crucial for generating a useful topic model. We have some prerequisites i.e. we download the stopwords from NLTK (Natural Language Toolkit) and spacy's en model for text pre-processing.
It is noticeable that the parsed full-text tweets have many emails, `RT', newline and extra spaces that is quite distracting. We use Python Regular Expressions (re module) to get rid of them. Then we tokenize each text into a list of words, remove punctuation and unnecessary characters. We use Python Gensim package for further processing. Gensim's simple_preprocess() is used for tokenization and removing punctuation. We use Gensim's Phrases model to build bigrams. Certain parts of English speech, like conjunctions ("for", "or") or the word "the" are meaningless to a topic model. These terms are called stopwords and we remove them from the token list. We use spacy model for lemmatization to keep only noun, adjective, verb, adverb. Stemming words is another common NLP technique to reduce topically similar words to their root. For example, "connect", "connecting", "connected", "connection", "connections" all have similar meanings; stemming reduces those terms to "connect". The Porter stemming algorithm BIBREF16 is the most widely used method.
Methodology
We use Twitter health-related data for this analysis. Subsections 3.1, 3.2, 3.3, and 3.4 present in detail how we infer the meaning of unstructured data. Subsection 3.5 shows how we do manual annotation for ground truth comparison. Fig. FIGREF6 shows the overall pipeline of correlation mining.
Construct document-term matrix
The result of the data cleaning stage is texts, a tokenized, stopped, stemmed and lemmatized list of words from a single tweet. To understand how frequently each term occurs within each tweet, we construct a document-term matrix using Gensim's Dictionary() function. Gensim's doc2bow() function converts dictionary into a bag-of-words. In the bag-of-words model, each tweet is represented by a vector in a m-dimensional coordinate space, where m is number of unique terms across all tweets. This set of terms is called the corpus vocabulary.
Topic Modeling
Topic modeling is a text mining technique which provides methods for identifying co-occurring keywords to summarize collections of textual information. This is used to analyze collections of documents, each of which is represented as a mixture of topics, where each topic is a probability distribution over words BIBREF17 . Applying these models to a document collection involves estimating the topic distributions and the weight each topic receives in each document. A number of algorithms exist for solving this problem. We use three unsupervised machine learning algorithms to explore the topics of the tweets: Latent Semantic Analysis (LSA) BIBREF13 , Non-negative Matrix Factorization (NMF) BIBREF14 , and Latent Dirichlet Allocation (LDA) BIBREF15 . Fig. FIGREF7 shows the general idea of topic modeling methodology. Each tweet is considered as a document. LSA, NMF, and LDA use Bag of Words (BoW) model, which results in a term-document matrix (occurrence of terms in a document). Rows represent terms (words) and columns represent documents (tweets). After completing topic modeling, we identify the groups of co-occurring words in tweets. These group co-occurring related words makes "topics".
LSA (Latent Semantic Analysis) BIBREF13 is also known as LSI (Latent Semantic Index). It learns latent topics by performing a matrix decomposition on the document-term matrix using Singular Value Decomposition (SVD) BIBREF18 . After corpus creation in [subsec:3.1]Subsection 3.1, we generate an LSA model using Gensim.
Non-negative Matrix Factorization (NMF) BIBREF14 is a widely used tool for the analysis of high-dimensional data as it automatically extracts sparse and meaningful features from a set of non-negative data vectors. It is a matrix factorization method where we constrain the matrices to be non-negative.
We apply Term Weighting with term frequency-inverse document frequency (TF-IDF) BIBREF19 to improve the usefulness of the document-term matrix (created in Subsection 3.1) by giving more weight to the more "important" terms. In Scikit-learn, we can generate a TF-IDF weighted document-term matrix by using TfidfVectorizer. We import the NMF model class from sklearn.decomposition and fit the topic model to tweets.
Latent Dirichlet Allocation (LDA) BIBREF15 is widely used for identifying the topics in a set of documents, building on Probabilistic Latent Semantic Analysis (PLSI) BIBREF20 . LDA considers each document as a collection of topics in a certain proportion and each topic as a collection of keywords in a certain proportion. We provide LDA the optimal number of topics, it rearranges the topics' distribution within the documents and keywords' distribution within the topics to obtain a good composition of topic-keywords distribution.
We have corpus generated in [subsec:3.1]Subsection 3.1 to train the LDA model. In addition to the corpus and dictionary, we provide the number of topics as well.
Optimal number of Topics
Topic modeling is an unsupervised learning, so the set of possible topics are unknown. To find out the optimal number of topic, we build many LSA, NMF, LDA models with different values of number of topics (k) and pick the one that gives the highest coherence score. Choosing a `k' that marks the end of a rapid growth of topic coherence usually offers meaningful and interpretable topics.
We use Gensim's coherencemodel to calculate topic coherence for topic models (LSA and LDA). For NMF, we use a topic coherence measure called TC-W2V. This measure relies on the use of a word embedding model constructed from the corpus. So in this step, we use the Gensim implementation of Word2Vec BIBREF21 to build a Word2Vec model based on the collection of tweets.
We achieve the highest coherence score of 0.4495 when the number of topics is 2 for LSA; for NMF the highest coherence value is 0.6433 for k = 4; and for LDA we also get 4 topics, with the highest coherence score being 0.3871 (see Fig. FIGREF8 ).
For our dataset, we picked k = 2, 4, and 4 with the highest coherence value for LSA, NMF, and LDA correspondingly (Fig. FIGREF8 ). Table TABREF13 shows the topics and top-10 keywords of the corresponding topic. We get more informative and understandable topics using LDA model than LSA. LSA decomposed matrix is a highly dense matrix, so it is difficult to index individual dimension. LSA is unable to capture the multiple meanings of words. It offers lower accuracy than LDA.
In the case of NMF, we observe that the same keywords are repeated in multiple topics. The keywords "go" and "day" are both repeated in Topic 2, Topic 3, and Topic 4 (Table TABREF13 ). In Table TABREF13 the keyword "yoga" is found in both Topic 1 and Topic 4. We also notice that the keyword "eat" is in Topic 2 and Topic 3 (Table TABREF13 ). If the same keywords are repeated in multiple topics, it is probably a sign that `k' is too large, even though we achieve the highest coherence score in NMF for k=4.
We use the LDA model for our further analysis, because LDA is good at identifying coherent topics whereas NMF usually gives less coherent topics. In the average case NMF and LDA are similar, but LDA is more consistent.
Topic Inference
After doing topic modeling using the three different methods LSA, NMF, and LDA, we use LDA for further analysis, i.e. to observe the dominant topic, the 2nd dominant topic and the percentage contribution of the topics in each tweet of the training data. To observe the model behavior on new tweets that are not included in the training set, we follow the same procedure to observe the dominant topic, the 2nd dominant topic and the percentage contribution of the topics in each tweet of the testing data. Table TABREF30 shows some tweets and the corresponding dominant topic, 2nd dominant topic and percentage contribution of the topics in each tweet.
Manual Annotation
To calculate the accuracy of model in comparison with ground truth label, we selected top 500 tweets from train dataset (40k tweets). We extracted 500 new tweets (22 April, 2019) as a test dataset. We did manual annotation both for train and test data by choosing one topic among the 4 topics generated from LDA model (7th, 8th, 9th, and 10th columns of Table TABREF13 ) for each tweet based on the intent of the tweet. Consider the following two tweets:
Tweet 1: Learning some traditional yoga with my good friend.
Tweet 2: Why You Should #LiftWeights to Lose #BellyFat #Fitness #core #abs #diet #gym #bodybuilding #workout #yoga
The intention of Tweet 1 is yoga activity (i.e. learning yoga). Tweet 2 is more about weight lifting to reduce belly fat. This tweet is related to workout. When we do manual annotation, we assign Topic 2 in Tweet 1, and Topic 1 in Tweet 2. It's not wise to assign Topic 2 for both tweets based on the keyword "yoga". During annotation, we focus on functionality of tweets.
Visualization
We use LDAvis BIBREF22 , a web-based interactive visualization of topics estimated using LDA. Gensim's pyLDAVis is the most commonly used visualization tool to visualize the information contained in a topic model. In Fig. FIGREF21 , each bubble on the left-hand side plot represents a topic. The larger the bubble, the more prevalent that topic is. A good topic model has fairly big, non-overlapping bubbles scattered throughout the chart instead of being clustered in one quadrant. A model with too many topics typically has many overlapping, small-sized bubbles clustered in one region of the chart. On the right-hand side, the words represent the salient keywords.
If we move the cursor over one of the bubbles (Fig. FIGREF21 ), the words and bars on the right-hand side have been updated and top-30 salient keywords that form the selected topic and their estimated term frequencies are shown.
We observe interesting hidden correlations in the data. Fig. FIGREF24 has Topic 2 as the selected topic. Topic 2 contains the top-4 co-occurring keywords "vegan", "yoga", "job", "every_woman" having the highest term frequency. We can infer different things from the topic, such as "women usually practice yoga more than men", "women teach yoga and take it as a job", and "yogis follow a vegan diet". We would say there are noticeable correlations in the data, i.e. `Yoga-Veganism' and `Women-Yoga'.
Topic Frequency Distribution
Each tweet is composed of multiple topics. But, typically only one of the topics is dominant. We extract the dominant and 2nd dominant topic for each tweet and show the weight of the topic (percentage of contribution in each tweet) and the corresponding keywords.
We plot the frequency of each topic's distribution on tweets in histogram. Fig. FIGREF25 shows the dominant topics' frequency and Fig. FIGREF25 shows the 2nd dominant topics' frequency on tweets. From Fig. FIGREF25 we observe that Topic 1 became either the dominant topic or the 2nd dominant topic for most of the tweets. 7th column of Table TABREF13 shows the corresponding top-10 keywords of Topic 1.
Comparison with Ground Truth
To compare with ground truth, we gradually increased the size of dataset 100, 200, 300, 400, and 500 tweets from train data and test data (new tweets) and did manual annotation both for train/test data based on functionality of tweets (described in [subsec:3.5]Subsection 3.5).
For accuracy calculation, we consider the dominant topic only. We achieved 66% train accuracy and 51% test accuracy when the size of dataset is 500 (Fig. FIGREF28 ). We did baseline implementation with random inference by running multiple times with different seeds and took the average accuracy. For dataset 500, the accuracy converged towards 25% which is reasonable as we have 4 topics.
Observation and Future Work
In Table TABREF30 , we show some observations. For the tweets in 1st and 2nd row (Table TABREF30 ), we observed understandable topic. We also noticed misleading topic and unrelated topic for few tweets (3rd and 4th row of Table TABREF30 ).
In the 1st row of Table TABREF30 , we show a tweet from train data and we got Topic 2 as a dominant topic which has 61% of contribution in this tweet. Topic 1 is 2nd dominant topic and 18% contribution here.
2nd row of Table TABREF30 shows a tweet from test set. We found Topic 2 as a dominant topic with 33% of contribution and Topic 4 as 2nd dominant topic with 32% contribution in this tweet.
In the 3rd (Table TABREF30 ), we have a tweet from test data and we got Topic 2 as a dominant topic which has 43% of contribution in this tweet. Topic 3 is 2nd dominant with 23% contribution which is misleading topic. The model misinterprets the words `water in hand' and infers topic which has keywords "swimming, swim, pool". But the model should infer more reasonable topic (Topic 1 which has keywords "diet, workout") here.
We got Topic 2 as the dominant topic for the tweet in the 4th row (Table TABREF30 ), which is an unrelated topic for this tweet, while the most relevant topic of this tweet (Topic 2) appears as the 2nd dominant topic. We think that during accuracy comparison with the ground truth the 2nd dominant topic might also be considered.
In future, we will extract more tweets and train the model and observe the model behavior on test data. As we found misleading and unrelated topic in test cases, it is important to understand the reasons behind the predictions. We will incorporate Local Interpretable model-agnostic Explanation (LIME) BIBREF23 method for the explanation of model predictions. We will also do predictive causality analysis on tweets.
Conclusions
It is challenging to analyze social media data for different application purpose. In this work, we explored Twitter health-related data, inferred topic using topic modeling (i.e. LSA, NMF, LDA), observed model behavior on new tweets, compared train/test accuracy with ground truth, employed different visualizations after information integration and discovered interesting correlation (Yoga-Veganism) in data. In future, we will incorporate Local Interpretable model-agnostic Explanation (LIME) method to understand model interpretability. | Women-Yoga |
225a567eeb2698a9d3f1024a8b270313a6d15f82 | 225a567eeb2698a9d3f1024a8b270313a6d15f82_0 | Q: what were the baselines?
Text: Introduction
Let us consider the goal of building machine reasoning systems based on knowledge from fulltext data like encyclopedic articles, scientific papers or news articles. Such machine reasoning systems, like humans researching a problem, must be able to recover evidence from large amounts of retrieved but mostly irrelevant information and judge the evidence to decide the answer to the question at hand.
A typical approach, used implicitly in information retrieval (and its extensions, like IR-based Question Answering systems BIBREF0 ), is to determine evidence relevancy by a keyword overlap feature (like tf-idf or BM-25 BIBREF1 ) and prune the evidence by the relevancy score. On the other hand, textual entailment systems that seek to confirm hypotheses based on evidence BIBREF2 BIBREF3 BIBREF4 are typically provided with only a single piece of evidence or only evidence pre-determined as relevant, and are often restricted to short and simple sentences without open-domain named entity occurrences. In this work, we seek to fuse information retrieval and textual entailment recognition by defining the Hypothesis Evaluation task as deciding the truth value of a hypothesis by integrating numerous pieces of evidence, not all of it equally relevant.
As a specific instance, we introduce the Argus Yes/No Question Answering task. The problem is, given a real-world event binary question like Did Donald Trump announce he is running for president? and numerous retrieved news article fragments as evidence, to determine the answer for the question. Our research is motivated by the Argus automatic reporting system for the Augur prediction market platform. BIBREF5 Therefore, we consider the question answering task within the constraints of a practical scenario that has a limited available dataset and only minimal supervision. Hence, authentic news sentences are the evidence (with noise like segmentation errors, irrelevant participial phrases, etc.), and whereas we have a gold standard for the correct answers, the model must do without explicit supervision on which individual evidence snippets are relevant and what they entail.
To this end, we introduce an open dataset of questions and newspaper evidence, and a neural model within the Sentence Pair Scoring framework BIBREF6 that (A) learns sentence embeddings for the question and evidence, (B) the embeddings represent both relevance and entailment characteristics as linear classifier inputs, and (C) the model aggregates all available evidence to produce a binary signal as the answer, which is the only training supervision.
We also evaluate our model on a related task that concerns ranking answers of multiple-choice questions given a set of evidencing sentences. We consider the MCTest dataset and the AI2-8grade/CK12 dataset that we introduce below.
The paper is structured as follows. In Sec. SECREF2 , we formally outline the Argus question answering task, describe the question-evidence dataset, and describe the multiple-choice questions task and datasets. In Sec. SECREF3 , we briefly survey the related work on similar problems, whereas in Sec. SECREF4 we propose our neural models for joint learning of sentence relevance and entailment. We present the results in Sec. SECREF5 and conclude with a summary, model usage recommendations and future work directions in Sec. SECREF6 .
The Hypothesis Evaluation Task
Formally, the Hypothesis Evaluation task is to build a function INLINEFORM0 , where INLINEFORM1 is a binary label (no towards yes) and INLINEFORM2 is a hypothesis instance in the form of question text INLINEFORM3 and a set of INLINEFORM4 evidence texts INLINEFORM5 as extracted from an evidence-carrying corpus.
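In code terms, the task amounts to implementing a single function of roughly the following shape; this is a minimal, assumed Python rendering of the formal definition, and the names are ours, not from the released implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class HypothesisInstance:
    question: str        # the question text
    evidence: List[str]  # evidence texts retrieved from the corpus

def evaluate_hypothesis(instance: HypothesisInstance) -> float:
    """Return a label in [0, 1], read as 'no' towards 'yes'.
    A real implementation would embed the question and each piece of
    evidence and aggregate the per-evidence judgements."""
    raise NotImplementedError
```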
Argus Dataset
Our main aim is to propose a solution to the Argus Task, where the Argus system BIBREF7 BIBREF5 is to automatically analyze and answer questions in the context of the Augur prediction market platform. In a prediction market, users pose questions about future events whereas others bet on the yes or no answer, with the assumption that the bet price reflects the real probability of the event. At a specified moment (e.g. after the date of a to-be-predicted sports match), the correct answer is retroactively determined and the bets are paid off. At a larger volume of questions, determining the bet results may present a significant overhead for running of the market. This motivates the Argus system, which should partially automate this determination — deciding questions related to recent events based on open news sources.
To train a machine learning model for the INLINEFORM0 function, we have created a dataset of questions with gold labels, and produced sets of evidence texts from a variety of newspapers using a pre-existing IR (information retrieval) component of the Argus system. We release this dataset openly.
To pose a reproducible task for the IR component, the time domain of questions was restricted from September 1, 2014 to September 1, 2015, and the topic domain was focused on politics, sports and the stock market. To build the question dataset, we have used several sources:
We asked Amazon Mechanical Turk users to pose questions, together with a golden label and a news article reference. This seeded the dataset with initial, somewhat redundant 250 questions.
We manually extended this dataset by derived questions with reversed polarity (to obtain an opposite answer).
We extended the data with questions autogenerated from 26 templates, pertaining to top sporting event winners and US senate or gubernatorial elections.
To build the evidence dataset, we used the Syphon preprocessing component BIBREF5 of the Argus implementation to identify semantic roles of all question tokens and produce the search keywords if a role was assigned to each token. We then used the IR component to query a corpus of newspaper articles, and kept sentences that contained at least 2/3 of all the keywords. Our corpus of articles contained articles from The Guardian (all articles) and from the New York Times (Sports, Politics and Business sections). Furthermore, we scraped partial archive.org historical data out of 35 RSS feeds from CNN, Reuters, BBC International, CBS News, ABC News, c|net, Financial Times, Skynews and the Washington Post.
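A minimal sketch of the sentence-keeping criterion used above (keep a sentence if it contains at least 2/3 of the question keywords); the whitespace/regex tokenization is a simplifying assumption.

```python
import re

def keep_sentence(sentence, keywords, threshold=2/3):
    """Keep the sentence if at least `threshold` of the keywords occur in it."""
    tokens = set(re.findall(r"\w+", sentence.lower()))
    hits = sum(1 for kw in keywords if kw.lower() in tokens)
    return hits >= threshold * len(keywords)

keywords = ["donald", "trump", "president"]
print(keep_sentence("Donald Trump announced he is running for president.", keywords))  # True
print(keep_sentence("The stock market fell on Monday.", keywords))                     # False
```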
For the final dataset, we kept only questions where at least a single piece of evidence was found (i.e. we successfully assigned a role to each token, found some news stories and found at least one sentence with 2/3 of question keywords within). The final size of the dataset is outlined in Fig. FIGREF8 and some examples are shown in Fig. FIGREF9 .
AI2-8grade/CK12 Dataset
The AI2 Elementary School Science Questions (no-diagrams variant) released by the Allen Institute cover 855 basic four-choice questions regarding high school science and follows up to the Allen AI Science Kaggle challenge. The vocabulary includes scientific jargon and named entities, and many questions are not factoid, requiring real-world reasoning or thought experiments.
We have combined each answer with the respective question (by substituting the wh-word in the question by each answer) and retrieved evidence sentences for each hypothesis using Solr search in a collection of CK-12 “Concepts B” textbooks. 525 questions attained any supporting evidence, examples are shown in Fig. FIGREF10 .
We consider this dataset as preliminary since it was not reviewed by a human and many hypotheses are apparently unprovable by the evidence we have gathered (i.e. the theoretical top accuracy is much lower than 1.0). However, we released it to the public and still included it in the comparison as these qualities reflect many realistic datasets of unknown qualities, so we find relative performances of models on such datasets instructive.
MCTest Dataset
The Machine Comprehension Test BIBREF8 dataset has been introduced to provide a challenge for researchers to come up with models that approach human-level reading comprehension, and serve as a higher-level alternative to semantic parsing tasks that enforce a specific knowledge representation. The dataset consists of a set of 660 stories spanning multiple sentences, written in simple and clean language (but with less restricted vocabulary than e.g. the bAbI dataset BIBREF9 ). Each story is accompanied by four questions and each of these lists four possible answers; the questions are tagged as based on just one in-story sentence, or requiring multiple sentence inference. We use an official extension of the dataset for RTE evaluation that again textually merges questions and answers.
The dataset is split in two parts, MC-160 and MC-500, based on provenance but similar in quality. We train all models on a joined training set.
The practical setting differs from the Argus task as the MCTest dataset contains a relatively restricted vocabulary and well-formed sentences. Furthermore, the goal is to find the single key point in the story to focus on, while in the Argus setting we may have many pieces of evidence supporting an answer; another specific characteristic of MCTest is that it consists of stories where the ordering and proximity of evidence sentences matter.
Related Work
Our primary concern when integrating natural language query with textual evidence is to find sentence-level representations suitable both for relevance weighing and answer prediction.
Sentence-level representations in the retrieval + inference context have been popularly proposed within the Memory Network framework BIBREF10 , but explored just in the form of averaged word embeddings; the task includes only very simple sentences and a small vocabulary. A much more realistic setting is introduced in the Answer Sentence Selection context BIBREF11 BIBREF6 , with state-of-the-art models using complex deep neural architectures with attention BIBREF12 , but the selection task consists of only retrieval and no inference (answer prediction). A more indirect retrieval task regarding news summarization was investigated by BIBREF13 .
In the entailment context, BIBREF4 introduced a large dataset with single-evidence sentence pairs (Stanford Natural Language Inference, SNLI), but a larger vocabulary and slightly more complicated (but still conservatively formed) sentences. They also proposed baseline recurrent neural model for modeling sentence representations, while word-level attention based models are being studied more recently BIBREF14 BIBREF15 .
In the MCTest text comprehension challenge BIBREF8 , the leading models use complex engineered features ensembling multiple traditional semantic NLP approaches BIBREF16 . The best deep model so far BIBREF17 uses convolutional neural networks for sentence representations, and attention on multiple levels to pick evidencing sentences.
Neural Model
Our approach is to use a sequence of word embeddings to build sentence embeddings for each hypothesis and respective evidence, then use the sentence embeddings to estimate relevance and entailment of each evidence with regard to the respective hypothesis, and finally integrate the evidence to a single answer.
Sentence Embeddings
To produce sentence embeddings, we investigated the neural models proposed in the dataset-sts framework for deep learning of sentence pair scoring functions. BIBREF6
We refer the reader to BIBREF6 and its references for detailed model descriptions. We evaluate an RNN model which uses bidirectionally summed GRU memory cells BIBREF18 and uses the final states as embeddings; a CNN model which uses sentence-max-pooled convolutional filters as embeddings BIBREF19 ; an RNN-CNN model which puts the CNN on top of per-token GRU outputs rather than the word embeddings BIBREF20 ; and an attn1511 model inspired by BIBREF20 that integrates the RNN-CNN model with per-word attention to build hypothesis-specific evidence embeddings. We also report the baseline results of avg mean of word embeddings in the sentence with projection matrix and DAN Deep Averaging Network model that employs word-level dropout and adds multiple nonlinear transformations on top of the averaged embeddings BIBREF21 .
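For illustration, a minimal tf.keras sketch of the RNN sentence embedding model (bidirectionally summed GRU final states used as the sentence embedding); the vocabulary size, dimensions and sequence length are assumptions, and this is not the exact dataset-sts implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, EMB_DIM, SENT_DIM, MAX_LEN = 20000, 50, 160, 60  # illustrative sizes

tokens = layers.Input(shape=(MAX_LEN,), dtype="int32")
embedded = layers.Embedding(VOCAB_SIZE, EMB_DIM, mask_zero=True)(tokens)
# Forward and backward GRU passes summed into a single sentence embedding.
sentence_embedding = layers.Bidirectional(layers.GRU(SENT_DIM), merge_mode="sum")(embedded)
encoder = tf.keras.Model(tokens, sentence_embedding)
encoder.summary()
```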
The original attn1511 model BIBREF6 (as tuned for the Answer Sentence Selection task) used a softmax attention mechanism that would effectively select only a few key words of the evidence to focus on — for a hypothesis-evidence token INLINEFORM0 scalar attention score INLINEFORM1 , the focus INLINEFORM2 is: INLINEFORM3
A different focus mechanism exhibited better performance in the Hypothesis Evaluation task, modelling per-token attention more independently: INLINEFORM0
We also use relu instead of tanh in the CNNs.
As model input, we use the standard GloVe embeddings BIBREF22 extended with binary inputs denoting token type and overlap with token or bigram in the paired sentence, as described in BIBREF6 . However, we introduce two changes to the word embedding model — we use 50-dimensional embeddings instead of 300-dimensional, and rather than building an adaptable embedding matrix from the training set words preinitialized by GloVe, we use only the top 100 most frequent tokens in the adaptable embedding matrix and use fixed GloVe vectors for all other tokens (including tokens not found in the training set). In preliminary experiments, this improved generalization for highly vocabulary-rich tasks like Argus, while still allowing the high-frequency tokens (like interpunction or conjunctions) to learn semantic operator representations.
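A sketch of the vocabulary handling described above, under the simplifying assumption of a plain frequency count over whitespace tokens: only the top 100 most frequent tokens get trainable embedding rows, everything else is looked up in fixed GloVe vectors (the GloVe loading is stubbed out here).

```python
import numpy as np
from collections import Counter

EMB_DIM, N_TRAINABLE = 50, 100

def build_vocab(training_sentences):
    counts = Counter(tok for sent in training_sentences for tok in sent.lower().split())
    frequent = [tok for tok, _ in counts.most_common(N_TRAINABLE)]
    return {tok: i for i, tok in enumerate(frequent)}

def embed_token(token, trainable_matrix, vocab, glove):
    """Trainable row for the top-100 frequent tokens, fixed GloVe vector otherwise."""
    if token in vocab:
        return trainable_matrix[vocab[token]]
    return glove.get(token, np.zeros(EMB_DIM))  # unknown tokens fall back to zeros

# Hypothetical usage: `glove` would be loaded from the 50-dimensional GloVe file.
glove = {}
vocab = build_vocab(["did donald trump announce he is running for president"])
trainable = np.random.randn(N_TRAINABLE, EMB_DIM) * 0.1
print(embed_token("the", trainable, vocab, glove).shape)
```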
As an additional method for producing sentence embeddings, we consider the Ubu. RNN transfer learning method proposed by BIBREF6 where an RNN model (as described above) is trained on the Ubuntu Dialogue task BIBREF23 . The pretrained model weights are used to initialize an RNN model which is then fine-tuned on the Hypothesis Evaluation task. We use the same model as originally proposed (except the aforementioned vocabulary handling modification), with the dot-product scoring used for Ubuntu Dialogue training replaced by MLP point-scores described below.
Evidence Integration
Our main proposed schema for evidence integration is Evidence Weighing. From each pair of hypothesis and evidence embeddings, we produce two INLINEFORM0 predictions using a pair of MLP point-scorers of dataset-sts BIBREF6 with a sigmoid activation function. The predictions are interpreted as INLINEFORM1 entailment (0 to 1 as no to yes) and relevance INLINEFORM2 . To integrate the predictions across multiple pieces of evidence, we propose a weighted average model: INLINEFORM3
We do not have access to any explicit labels for the evidence, but we train the model end-to-end with just INLINEFORM0 labels and the formula for INLINEFORM1 is differentiable, carrying over the gradient to the sentence embedding model. This can be thought of as a simple passage-wide attention model.
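A minimal numpy sketch of one plausible reading of the weighted-average integration (per-evidence entailment predictions weighted by relevance predictions, both from sigmoid scorers); the exact formula is lost to a placeholder above, so treat this as an assumption, and note the sketch only illustrates the aggregation, not the differentiable end-to-end training.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def integrate_evidence(entailment_logits, relevance_logits, eps=1e-8):
    """Relevance-weighted average of per-evidence entailment predictions
    for one hypothesis; one entry per piece of evidence."""
    c = sigmoid(np.asarray(entailment_logits))  # entailment in (0, 1), no -> yes
    r = sigmoid(np.asarray(relevance_logits))   # relevance weights in (0, 1)
    return float(np.sum(r * c) / (np.sum(r) + eps))

# Three pieces of evidence: two relevant and affirmative, one irrelevant.
print(integrate_evidence([2.0, 1.5, -1.0], [3.0, 2.0, -4.0]))
```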
As a baseline strategy, we also consider Evidence Averaging, where we simply produce a single scalar prediction per hypothesis-evidence pair (using the same strategy as above) and decide the hypothesis simply based on the mean prediction across available evidence.
Finally, following success reported in the Answer Sentence Selection task BIBREF6 , we consider a BM25 Feature combined with Evidence Averaging, where the MLP scorer that produces the pair scalar prediction as above takes an additional BM25 word overlap score input BIBREF1 besides the elementwise embedding comparisons.
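For reference, a hedged sketch of an Okapi BM25 word-overlap score over a small evidence collection, with standard parameter values k1=1.5 and b=0.75; this generic variant is not necessarily the exact scoring used in the pipeline.

```python
import math
from collections import Counter

def bm25_score(query_tokens, doc_tokens, corpus, k1=1.5, b=0.75):
    """Score one document against a query with BM25, using `corpus` for IDF and length stats."""
    n_docs = len(corpus)
    avg_len = sum(len(d) for d in corpus) / n_docs
    tf = Counter(doc_tokens)
    score = 0.0
    for term in query_tokens:
        df = sum(1 for d in corpus if term in d)
        idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1.0)
        freq = tf[term]
        denom = freq + k1 * (1 - b + b * len(doc_tokens) / avg_len)
        score += idf * freq * (k1 + 1) / denom
    return score

corpus = [["giants", "win", "world", "series"], ["royals", "lose", "game", "seven"]]
print(bm25_score(["royals", "win", "world", "series"], corpus[0], corpus))
```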
Experimental Setup
We implement the differentiable model in the Keras framework BIBREF24 and train the whole network from word embeddings to output evidence-integrated hypothesis label using the binary cross-entropy loss as an objective and the Adam optimization algorithm BIBREF25 . We apply INLINEFORM0 regularization and a INLINEFORM1 dropout.
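A minimal tf.keras rendering of that training configuration (binary cross-entropy loss, Adam optimizer, L2 regularization and dropout applied to an MLP point-scorer head); the coefficient values, layer sizes and input dimension are placeholders since they are not given in this excerpt.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

L2, DROPOUT = 1e-4, 0.3  # placeholder values, not the paper's exact settings

def scorer_head(pair_features):
    """MLP point-scorer head with a sigmoid output, as used for entailment/relevance scores."""
    x = layers.Dropout(DROPOUT)(pair_features)
    x = layers.Dense(64, activation="relu", kernel_regularizer=regularizers.l2(L2))(x)
    return layers.Dense(1, activation="sigmoid", kernel_regularizer=regularizers.l2(L2))(x)

pair_features = layers.Input(shape=(320,))  # assumed elementwise-comparison feature size
model = tf.keras.Model(pair_features, scorer_head(pair_features))
model.compile(loss="binary_crossentropy", optimizer=tf.keras.optimizers.Adam())
model.summary()
```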
Following the recommendation of BIBREF6 , we report expected test set question accuracy as determined by average accuracy in 16 independent trainings and with 95% confidence intervals based on the Student's t-distribution.
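The reported numbers can be reproduced from per-run accuracies with a Student's t confidence interval, for example as follows; the accuracy values here are made up.

```python
import numpy as np
from scipy import stats

accuracies = np.array([0.74, 0.71, 0.76, 0.73, 0.75, 0.72, 0.74, 0.73,
                       0.75, 0.74, 0.72, 0.76, 0.73, 0.74, 0.75, 0.73])  # 16 made-up runs

mean = accuracies.mean()
sem = stats.sem(accuracies)  # standard error of the mean
half_width = sem * stats.t.ppf(0.975, df=len(accuracies) - 1)
print(f"expected accuracy {mean:.3f} +/- {half_width:.3f} (95% CI)")
```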
Evaluation
In Fig. FIGREF26 , we report the model performance on the Argus task, showing that the Ubuntu Dialogue transfer RNN outperforms other proposed models by a large margin. However, a comparison of evidence integration approaches in Fig. FIGREF27 shows that evidence integration is not the major deciding factor and there are no statistically meaningful differences between the evaluated approaches. We measured a high correlation between classification and relevance scores with Pearson's INLINEFORM0 , showing that our model does not learn a separate evidence weighing function on this task.
In Fig. FIGREF28 , we look at the model performance on the AI2-8grade/CK12 task, repeating the story of Ubuntu Dialogue transfer RNN dominating other models. However, on this task our proposed evidence weighing scheme improves over simpler approaches — but just on the best model, as shown in Fig. FIGREF29 . On the other hand, the simplest averaging model benefits from at least BM25 information to select relevant evidence, apparently.
For the MCTest dataset, Fig. FIGREF30 compares our proposed models with the current state-of-art ensemble of hand-crafted syntactic and frame-semantic features BIBREF16 , as well as past neural models from the literature, all using attention mechanisms — the Attentive Reader of BIBREF26 , Neural Reasoner of BIBREF27 and the HABCNN model family of BIBREF17 . We see that averaging-based models are surprisingly effective on this task, and in particular on the MC-500 dataset it can beat even the best so far reported model of HABCNN-TE. Our proposed transfer model is statistically equivalent to the best model on both datasets (furthermore, previous work did not include confidence intervals, even though their models should also be stochastically initialized).
As expected, our models did badly on the multiple-evidence class of questions — we made no attempt to model information flow across adjacent sentences in our models as this aspect is unique to MCTest in the context of our work.
Interestingly, evidence weighing does play an important role on the MCTest task as shown in Fig. FIGREF31 , significantly boosting model accuracy. This confirms that a mechanism to allocate attention to different sentences is indeed crucial for this task.
Analysis
While we can universally proclaim Ubu. RNN as the best model, we observe many aspects of the Hypothesis Evaluation problem that are shared by the AI2-8grade/CK12 and MCTest tasks, but not by the Argus task.
Our largest surprise lies in the ineffectiveness of evidence weighing on the Argus task, since observations of irrelevant passages initially led us to investigate this model. We may also see that the non-pretrained RNN does very well on the Argus task while the CNN is a better model otherwise.
An aspect that could explain this rift is that the latter two tasks are primarily retrieval based, where we seek to judge each evidence as irrelevant or essentially a paraphrase of the hypothesis. On the other hand, the Argus task is highly semantic and compositional, with the questions often differing just by a presence of negation — recurrent model that can capture long-term dependencies and alter sentence representations based on the presence of negation may represent an essential improvement over an n-gram-like convolutional scheme. We might also attribute the lack of success of evidence weighing in the Argus task to a more conservative scheme of passage retrieval employed in the IR pipeline that produced the dataset. Given the large vocabulary and noise levels in the data, we may also simply require more data to train the evidence weighing properly.
We see from the training vs. test accuracies that RNN-based models (including the word-level attention model) have a strong tendency to overfit on our small datasets, while CNN is much more resilient. While word-level attention seems appealing for such a task, we speculate that we simply might not have enough training data to properly train it. Investigating attention transfer is a point for future work — by our preliminary experiments on multiple datasets, attention models appear more task specific than the basic text comprehension models of memory based RNNs.
One concrete limitation of our models in case of the Argus task is a problem of reconciling particular named entity instances. The more obvious form of this issue is Had Roger Federer beat Martin Cilic in US OPEN 2014? versus an opposite Had Martin Cilic beat Roger Federer in US OPEN 2014? — another form of this problem is reconciling a hypothesis like Will the Royals win the World Series? with evidence Giants Win World Series With Game 7 Victory Over Royals. An abstract embedding of the sentence will not carry over the required information — it is important to explicitly pass and reconcile the roles of multiple named entities which cannot be meaningfully embedded in a GloVe-like semantic vector space.
Conclusion
We have established a general Hypothesis Evaluation task with three datasets of various properties, and shown that neural models can exhibit strong performance (with less hand-crafting effort than non-neural classifiers). We propose an evidence weighing model that is never harmful and improves performance on some tasks. We also demonstrate that simple models can outperform or closely match performance of complex architectures; all the models we consider are task-independent and were successfully used in different contexts than Hypothesis Evaluation BIBREF6 . Our results empirically show that a basic RNN text comprehension model well trained on a large dataset (even if the task is unrelated and vocabulary characteristics are very different) outperforms or matches more complex architectures trained only on the dataset of the task at hand.
Finally, on the MCTest dataset, our best proposed model is better or statistically indistinguishable from the best neural model reported so far BIBREF17 , even though it has a simpler architecture and only a naive attention mechanism.
We would like to draw several recommendations for future research from our findings: (A) encourage usage of basic neural architectures as evaluation baselines; (B) suggest that future research includes models pretrained on large data as baselines; (C) validate complex architectures on tasks with large datasets if they cannot beat baselines on small datasets; and (D) for randomized machine comprehension models (e.g. neural networks with random weight initialization, batch shuffling or probabilistic dropout), report expected test set performance based on multiple independent training runs.
As general advice for solving complex tasks with small datasets, besides point (B) above, our analysis suggests convolutional networks as the best models regarding the tendency to overfit, unless semantic compositionality plays a crucial role in the task; in this scenario, simple averaging-based models are a great start as well. Preinitializing a model also helps against overfitting.
We release our implementation of the Argus task, evidence integration models and processing of all the evaluated datasets as open source.
We believe the next step towards machine comprehension NLP models (based on deep learning but capable of dealing with real-world, large-vocabulary data) will involve research into a better way to deal with entities without available embeddings. When distinguishing specific entities, simple word-level attention mechanisms will not do. A promising approach could extend the flexibility of the final sentence representation, moving from attention mechanism to a memory mechanism by allowing the network to remember a set of “facts” derived from each sentence; related work has been done for example on end-to-end differentiable shift-reduce parsers with LSTM as stack cells BIBREF28 .
Acknowledgments
This work was co-funded by the Augur Project of the Forecast Foundation and financially supported by the Grant Agency of the Czech Technical University in Prague, grant No. SGS16/ 084/OHK3/1T/13. Computational resources were provided by the CESNET LM2015042 and the CERIT Scientific Cloud LM2015085, provided under the programme “Projects of Large Research, Development, and Innovations Infrastructures.”
We'd like to thank Peronet Despeignes of the Augur Project for his support. Carl Burke has provided instructions for searching CK-12 ebooks within the Kaggle challenge. | RNN model, CNN model , RNN-CNN model, attn1511 model, Deep Averaging Network model, avg mean of word embeddings in the sentence with projection matrix |
35b10e0dc2cb4a1a31d5692032dc3fbda933bf7d | 35b10e0dc2cb4a1a31d5692032dc3fbda933bf7d_0 | Q: what is the state of the art for ranking mc test answers?
Text: Introduction
Let us consider the goal of building machine reasoning systems based on knowledge from fulltext data like encyclopedic articles, scientific papers or news articles. Such machine reasoning systems, like humans researching a problem, must be able to recover evidence from large amounts of retrieved but mostly irrelevant information and judge the evidence to decide the answer to the question at hand.
A typical approach, used implicitly in information retrieval (and its extensions, like IR-based Question Answering systems BIBREF0 ), is to determine evidence relevancy by a keyword overlap feature (like tf-idf or BM-25 BIBREF1 ) and prune the evidence by the relevancy score. On the other hand, textual entailment systems that seek to confirm hypotheses based on evidence BIBREF2 BIBREF3 BIBREF4 are typically provided with only a single piece of evidence or only evidence pre-determined as relevant, and are often restricted to short and simple sentences without open-domain named entity occurrences. In this work, we seek to fuse information retrieval and textual entailment recognition by defining the Hypothesis Evaluation task as deciding the truth value of a hypothesis by integrating numerous pieces of evidence, not all of it equally relevant.
As a specific instance, we introduce the Argus Yes/No Question Answering task. The problem is, given a real-world event binary question like Did Donald Trump announce he is running for president? and numerous retrieved news article fragments as evidence, to determine the answer for the question. Our research is motivated by the Argus automatic reporting system for the Augur prediction market platform. BIBREF5 Therefore, we consider the question answering task within the constraints of a practical scenario that has a limited available dataset and only minimal supervision. Hence, authentic news sentences are the evidence (with noise like segmentation errors, irrelevant participial phrases, etc.), and whereas we have a gold standard for the correct answers, the model must do without explicit supervision on which individual evidence snippets are relevant and what they entail.
To this end, we introduce an open dataset of questions and newspaper evidence, and a neural model within the Sentence Pair Scoring framework BIBREF6 that (A) learns sentence embeddings for the question and evidence, (B) the embeddings represent both relevance and entailment characteristics as linear classifier inputs, and (C) the model aggregates all available evidence to produce a binary signal as the answer, which is the only training supervision.
We also evaluate our model on a related task that concerns ranking answers of multiple-choice questions given a set of evidencing sentences. We consider the MCTest dataset and the AI2-8grade/CK12 dataset that we introduce below.
The paper is structured as follows. In Sec. SECREF2 , we formally outline the Argus question answering task, describe the question-evidence dataset, and describe the multiple-choice questions task and datasets. In Sec. SECREF3 , we briefly survey the related work on similar problems, whereas in Sec. SECREF4 we propose our neural models for joint learning of sentence relevance and entailment. We present the results in Sec. SECREF5 and conclude with a summary, model usage recommendations and future work directions in Sec. SECREF6 .
The Hypothesis Evaluation Task
Formally, the Hypothesis Evaluation task is to build a function INLINEFORM0 , where INLINEFORM1 is a binary label (no towards yes) and INLINEFORM2 is a hypothesis instance in the form of question text INLINEFORM3 and a set of INLINEFORM4 evidence texts INLINEFORM5 as extracted from an evidence-carrying corpus.
Argus Dataset
Our main aim is to propose a solution to the Argus Task, where the Argus system BIBREF7 BIBREF5 is to automatically analyze and answer questions in the context of the Augur prediction market platform. In a prediction market, users pose questions about future events whereas others bet on the yes or no answer, with the assumption that the bet price reflects the real probability of the event. At a specified moment (e.g. after the date of a to-be-predicted sports match), the correct answer is retroactively determined and the bets are paid off. At a larger volume of questions, determining the bet results may present a significant overhead for running of the market. This motivates the Argus system, which should partially automate this determination — deciding questions related to recent events based on open news sources.
To train a machine learning model for the INLINEFORM0 function, we have created a dataset of questions with gold labels, and produced sets of evidence texts from a variety of newspapers using a pre-existing IR (information retrieval) component of the Argus system. We release this dataset openly.
To pose a reproducible task for the IR component, the time domain of questions was restricted from September 1, 2014 to September 1, 2015, and the topic domain was focused on politics, sports and the stock market. To build the question dataset, we have used several sources:
We asked Amazon Mechanical Turk users to pose questions, together with a golden label and a news article reference. This seeded the dataset with initial, somewhat redundant 250 questions.
We manually extended this dataset by derived questions with reversed polarity (to obtain an opposite answer).
We extended the data with questions autogenerated from 26 templates, pertaining to top sporting event winners and US senate or gubernatorial elections.
To build the evidence dataset, we used the Syphon preprocessing component BIBREF5 of the Argus implementation to identify semantic roles of all question tokens and produce the search keywords if a role was assigned to each token. We then used the IR component to query a corpus of newspaper articles, and kept sentences that contained at least 2/3 of all the keywords. Our corpus of articles contained articles from The Guardian (all articles) and from the New York Times (Sports, Politics and Business sections). Furthermore, we scraped partial archive.org historical data out of 35 RSS feeds from CNN, Reuters, BBC International, CBS News, ABC News, c|net, Financial Times, Skynews and the Washington Post.
For the final dataset, we kept only questions where at least a single piece of evidence was found (i.e. we successfully assigned a role to each token, found some news stories and found at least one sentence with 2/3 of question keywords within). The final size of the dataset is outlined in Fig. FIGREF8 and some examples are shown in Fig. FIGREF9 .
AI2-8grade/CK12 Dataset
The AI2 Elementary School Science Questions (no-diagrams variant) released by the Allen Institute cover 855 basic four-choice questions regarding high school science and follows up to the Allen AI Science Kaggle challenge. The vocabulary includes scientific jargon and named entities, and many questions are not factoid, requiring real-world reasoning or thought experiments.
We have combined each answer with the respective question (by substituting the wh-word in the question by each answer) and retrieved evidence sentences for each hypothesis using Solr search in a collection of CK-12 “Concepts B” textbooks. 525 questions attained any supporting evidence, examples are shown in Fig. FIGREF10 .
We consider this dataset as preliminary since it was not reviewed by a human and many hypotheses are apparently unprovable by the evidence we have gathered (i.e. the theoretical top accuracy is much lower than 1.0). However, we released it to the public and still included it in the comparison as these qualities reflect many realistic datasets of unknown qualities, so we find relative performances of models on such datasets instructive.
MCTest Dataset
The Machine Comprehension Test BIBREF8 dataset has been introduced to provide a challenge for researchers to come up with models that approach human-level reading comprehension, and serve as a higher-level alternative to semantic parsing tasks that enforce a specific knowledge representation. The dataset consists of a set of 660 stories spanning multiple sentences, written in simple and clean language (but with less restricted vocabulary than e.g. the bAbI dataset BIBREF9 ). Each story is accompanied by four questions and each of these lists four possible answers; the questions are tagged as based on just one in-story sentence, or requiring multiple sentence inference. We use an official extension of the dataset for RTE evaluation that again textually merges questions and answers.
The dataset is split in two parts, MC-160 and MC-500, based on provenance but similar in quality. We train all models on a joined training set.
The practical setting differs from the Argus task as the MCTest dataset contains a relatively restricted vocabulary and well-formed sentences. Furthermore, the goal is to find the single key point in the story to focus on, while in the Argus setting we may have many pieces of evidence supporting an answer; another specific characteristic of MCTest is that it consists of stories where the ordering and proximity of evidence sentences matter.
Related Work
Our primary concern when integrating natural language query with textual evidence is to find sentence-level representations suitable both for relevance weighing and answer prediction.
Sentence-level representations in the retrieval + inference context have been popularly proposed within the Memory Network framework BIBREF10 , but explored just in the form of averaged word embeddings; the task includes only very simple sentences and a small vocabulary. A much more realistic setting is introduced in the Answer Sentence Selection context BIBREF11 BIBREF6 , with state-of-the-art models using complex deep neural architectures with attention BIBREF12 , but the selection task consists of only retrieval and no inference (answer prediction). A more indirect retrieval task regarding news summarization was investigated by BIBREF13 .
In the entailment context, BIBREF4 introduced a large dataset with single-evidence sentence pairs (Stanford Natural Language Inference, SNLI), but a larger vocabulary and slightly more complicated (but still conservatively formed) sentences. They also proposed baseline recurrent neural model for modeling sentence representations, while word-level attention based models are being studied more recently BIBREF14 BIBREF15 .
In the MCTest text comprehension challenge BIBREF8 , the leading models use complex engineered features ensembling multiple traditional semantic NLP approaches BIBREF16 . The best deep model so far BIBREF17 uses convolutional neural networks for sentence representations, and attention on multiple levels to pick evidencing sentences.
Neural Model
Our approach is to use a sequence of word embeddings to build sentence embeddings for each hypothesis and respective evidence, then use the sentence embeddings to estimate relevance and entailment of each evidence with regard to the respective hypothesis, and finally integrate the evidence to a single answer.
Sentence Embeddings
To produce sentence embeddings, we investigated the neural models proposed in the dataset-sts framework for deep learning of sentence pair scoring functions. BIBREF6
We refer the reader to BIBREF6 and its references for detailed model descriptions. We evaluate an RNN model which uses bidirectionally summed GRU memory cells BIBREF18 and uses the final states as embeddings; a CNN model which uses sentence-max-pooled convolutional filters as embeddings BIBREF19 ; an RNN-CNN model which puts the CNN on top of per-token GRU outputs rather than the word embeddings BIBREF20 ; and an attn1511 model inspired by BIBREF20 that integrates the RNN-CNN model with per-word attention to build hypothesis-specific evidence embeddings. We also report the baseline results of avg mean of word embeddings in the sentence with projection matrix and DAN Deep Averaging Network model that employs word-level dropout and adds multiple nonlinear transformations on top of the averaged embeddings BIBREF21 .
The original attn1511 model BIBREF6 (as tuned for the Answer Sentence Selection task) used a softmax attention mechanism that would effectively select only a few key words of the evidence to focus on — for a hypothesis-evidence token INLINEFORM0 scalar attention score INLINEFORM1 , the focus INLINEFORM2 is: INLINEFORM3
A different focus mechanism exhibited better performance in the Hypothesis Evaluation task, modelling per-token attention more independently: INLINEFORM0
We also use relu instead of tanh in the CNNs.
As model input, we use the standard GloVe embeddings BIBREF22 extended with binary inputs denoting token type and overlap with token or bigram in the paired sentence, as described in BIBREF6 . However, we introduce two changes to the word embedding model — we use 50-dimensional embeddings instead of 300-dimensional, and rather than building an adaptable embedding matrix from the training set words preinitialized by GloVe, we use only the top 100 most frequent tokens in the adaptable embedding matrix and use fixed GloVe vectors for all other tokens (including tokens not found in the training set). In preliminary experiments, this improved generalization for highly vocabulary-rich tasks like Argus, while still allowing the high-frequency tokens (like interpunction or conjunctions) to learn semantic operator representations.
As an additional method for producing sentence embeddings, we consider the Ubu. RNN transfer learning method proposed by BIBREF6 where an RNN model (as described above) is trained on the Ubuntu Dialogue task BIBREF23 . The pretrained model weights are used to initialize an RNN model which is then fine-tuned on the Hypothesis Evaluation task. We use the same model as originally proposed (except the aforementioned vocabulary handling modification), with the dot-product scoring used for Ubuntu Dialogue training replaced by MLP point-scores described below.
Evidence Integration
Our main proposed schema for evidence integration is Evidence Weighing. From each pair of hypothesis and evidence embeddings, we produce two INLINEFORM0 predictions using a pair of MLP point-scorers of dataset-sts BIBREF6 with a sigmoid activation function. The predictions are interpreted as INLINEFORM1 entailment (0 to 1 as no to yes) and relevance INLINEFORM2 . To integrate the predictions across multiple pieces of evidence, we propose a weighted average model: INLINEFORM3
We do not have access to any explicit labels for the evidence, but we train the model end-to-end with just INLINEFORM0 labels and the formula for INLINEFORM1 is differentiable, carrying over the gradient to the sentence embedding model. This can be thought of as a simple passage-wide attention model.
As a baseline strategy, we also consider Evidence Averaging, where we simply produce a single scalar prediction per hypothesis-evidence pair (using the same strategy as above) and decide the hypothesis simply based on the mean prediction across available evidence.
Finally, following success reported in the Answer Sentence Selection task BIBREF6 , we consider a BM25 Feature combined with Evidence Averaging, where the MLP scorer that produces the pair scalar prediction as above takes an additional BM25 word overlap score input BIBREF1 besides the elementwise embedding comparisons.
Experimental Setup
We implement the differentiable model in the Keras framework BIBREF24 and train the whole network from word embeddings to output evidence-integrated hypothesis label using the binary cross-entropy loss as an objective and the Adam optimization algorithm BIBREF25 . We apply INLINEFORM0 regularization and a INLINEFORM1 dropout.
Following the recommendation of BIBREF6 , we report expected test set question accuracy as determined by average accuracy in 16 independent trainings and with 95% confidence intervals based on the Student's t-distribution.
Evaluation
In Fig. FIGREF26 , we report the model performance on the Argus task, showing that the Ubuntu Dialogue transfer RNN outperforms other proposed models by a large margin. However, a comparison of evidence integration approaches in Fig. FIGREF27 shows that evidence integration is not the major deciding factor and there are no statistically meaningful differences between the evaluated approaches. We measured a high correlation between classification and relevance scores with Pearson's INLINEFORM0 , showing that our model does not learn a separate evidence weighing function on this task.
In Fig. FIGREF28 , we look at the model performance on the AI2-8grade/CK12 task, repeating the story of Ubuntu Dialogue transfer RNN dominating other models. However, on this task our proposed evidence weighing scheme improves over simpler approaches — but just on the best model, as shown in Fig. FIGREF29 . On the other hand, the simplest averaging model benefits from at least BM25 information to select relevant evidence, apparently.
For the MCTest dataset, Fig. FIGREF30 compares our proposed models with the current state-of-art ensemble of hand-crafted syntactic and frame-semantic features BIBREF16 , as well as past neural models from the literature, all using attention mechanisms — the Attentive Reader of BIBREF26 , Neural Reasoner of BIBREF27 and the HABCNN model family of BIBREF17 . We see that averaging-based models are surprisingly effective on this task, and in particular on the MC-500 dataset it can beat even the best so far reported model of HABCNN-TE. Our proposed transfer model is statistically equivalent to the best model on both datasets (furthermore, previous work did not include confidence intervals, even though their models should also be stochastically initialized).
As expected, our models did badly on the multiple-evidence class of questions — we made no attempt to model information flow across adjacent sentences in our models as this aspect is unique to MCTest in the context of our work.
Interestingly, evidence weighing does play an important role on the MCTest task as shown in Fig. FIGREF31 , significantly boosting model accuracy. This confirms that a mechanism to allocate attention to different sentences is indeed crucial for this task.
Analysis
While we can universally proclaim Ubu. RNN as the best model, we observe many aspects of the Hypothesis Evaluation problem that are shared by the AI2-8grade/CK12 and MCTest tasks, but not by the Argus task.
Our largest surprise lies in the ineffectiveness of evidence weighing on the Argus task, since observations of irrelevant passages initially led us to investigate this model. We may also see that the non-pretrained RNN does very well on the Argus task while the CNN is a better model otherwise.
An aspect that could explain this rift is that the latter two tasks are primarily retrieval based, where we seek to judge each evidence as irrelevant or essentially a paraphrase of the hypothesis. On the other hand, the Argus task is highly semantic and compositional, with the questions often differing just by a presence of negation — recurrent model that can capture long-term dependencies and alter sentence representations based on the presence of negation may represent an essential improvement over an n-gram-like convolutional scheme. We might also attribute the lack of success of evidence weighing in the Argus task to a more conservative scheme of passage retrieval employed in the IR pipeline that produced the dataset. Given the large vocabulary and noise levels in the data, we may also simply require more data to train the evidence weighing properly.
We see from the training vs. test accuracies that RNN-based models (including the word-level attention model) have a strong tendency to overfit on our small datasets, while CNN is much more resilient. While word-level attention seems appealing for such a task, we speculate that we simply might not have enough training data to properly train it. Investigating attention transfer is a point for future work — by our preliminary experiments on multiple datasets, attention models appear more task specific than the basic text comprehension models of memory based RNNs.
One concrete limitation of our models in case of the Argus task is a problem of reconciling particular named entity instances. The more obvious form of this issue is Had Roger Federer beat Martin Cilic in US OPEN 2014? versus an opposite Had Martin Cilic beat Roger Federer in US OPEN 2014? — another form of this problem is reconciling a hypothesis like Will the Royals win the World Series? with evidence Giants Win World Series With Game 7 Victory Over Royals. An abstract embedding of the sentence will not carry over the required information — it is important to explicitly pass and reconcile the roles of multiple named entities which cannot be meaningfully embedded in a GloVe-like semantic vector space.
Conclusion
We have established a general Hypothesis Evaluation task with three datasets of various properties, and shown that neural models can exhibit strong performance (with less hand-crafting effort than non-neural classifiers). We propose an evidence weighing model that is never harmful and improves performance on some tasks. We also demonstrate that simple models can outperform or closely match performance of complex architectures; all the models we consider are task-independent and were successfully used in different contexts than Hypothesis Evaluation BIBREF6 . Our results empirically show that a basic RNN text comprehension model well trained on a large dataset (even if the task is unrelated and vocabulary characteristics are very different) outperforms or matches more complex architectures trained only on the dataset of the task at hand.
Finally, on the MCTest dataset, our best proposed model is better or statistically indistinguishable from the best neural model reported so far BIBREF17 , even though it has a simpler architecture and only a naive attention mechanism.
We would like to draw several recommendations for future research from our findings: (A) encourage usage of basic neural architectures as evaluation baselines; (B) suggest that future research includes models pretrained on large data as baselines; (C) validate complex architectures on tasks with large datasets if they cannot beat baselines on small datasets; and (D) for randomized machine comprehension models (e.g. neural networks with random weight initialization, batch shuffling or probabilistic dropout), report expected test set performance based on multiple independent training runs.
As general advice for solving complex tasks with small datasets, besides point (B) above, our analysis suggests convolutional networks as the best models regarding the tendency to overfit, unless semantic compositionality plays a crucial role in the task; in this scenario, simple averaging-based models are a great start as well. Preinitializing a model also helps against overfitting.
We release our implementation of the Argus task, evidence integration models and processing of all the evaluated datasets as open source.
We believe the next step towards machine comprehension NLP models (based on deep learning but capable of dealing with real-world, large-vocabulary data) will involve research into a better way to deal with entities without available embeddings. When distinguishing specific entities, simple word-level attention mechanisms will not do. A promising approach could extend the flexibility of the final sentence representation, moving from attention mechanism to a memory mechanism by allowing the network to remember a set of “facts” derived from each sentence; related work has been done for example on end-to-end differentiable shift-reduce parsers with LSTM as stack cells BIBREF28 .
Acknowledgments
This work was co-funded by the Augur Project of the Forecast Foundation and financially supported by the Grant Agency of the Czech Technical University in Prague, grant No. SGS16/ 084/OHK3/1T/13. Computational resources were provided by the CESNET LM2015042 and the CERIT Scientific Cloud LM2015085, provided under the programme “Projects of Large Research, Development, and Innovations Infrastructures.”
We'd like to thank Peronet Despeignes of the Augur Project for his support. Carl Burke has provided instructions for searching CK-12 ebooks within the Kaggle challenge. | ensemble of hand-crafted syntactic and frame-semantic features BIBREF16 |
f5eac66c08ebec507c582a2445e99317a83e9ebe | f5eac66c08ebec507c582a2445e99317a83e9ebe_0 | Q: what is the size of the introduced dataset?
Text: Introduction
Let us consider the goal of building machine reasoning systems based on knowledge from fulltext data like encyclopedic articles, scientific papers or news articles. Such machine reasoning systems, like humans researching a problem, must be able to recover evidence from large amounts of retrieved but mostly irrelevant information and judge the evidence to decide the answer to the question at hand.
A typical approach, used implicitly in information retrieval (and its extensions, like IR-based Question Answering systems BIBREF0 ), is to determine evidence relevancy by a keyword overlap feature (like tf-idf or BM-25 BIBREF1 ) and prune the evidence by the relevancy score. On the other hand, textual entailment systems that seek to confirm hypotheses based on evidence BIBREF2 BIBREF3 BIBREF4 are typically provided with only a single piece of evidence or only evidence pre-determined as relevant, and are often restricted to short and simple sentences without open-domain named entity occurrences. In this work, we seek to fuse information retrieval and textual entailment recognition by defining the Hypothesis Evaluation task as deciding the truth value of a hypothesis by integrating numerous pieces of evidence, not all of it equally relevant.
As a specific instance, we introduce the Argus Yes/No Question Answering task. The problem is, given a real-world event binary question like Did Donald Trump announce he is running for president? and numerous retrieved news article fragments as evidence, to determine the answer for the question. Our research is motivated by the Argus automatic reporting system for the Augur prediction market platform. BIBREF5 Therefore, we consider the question answering task within the constraints of a practical scenario that has a limited available dataset and only minimal supervision. Hence, authentic news sentences are the evidence (with noise like segmentation errors, irrelevant participial phrases, etc.), and whereas we have a gold standard for the correct answers, the model must do without explicit supervision on which individual evidence snippets are relevant and what they entail.
To this end, we introduce an open dataset of questions and newspaper evidence, and a neural model within the Sentence Pair Scoring framework BIBREF6 that (A) learns sentence embeddings for the question and evidence, (B) the embeddings represent both relevance and entailment characteristics as linear classifier inputs, and (C) the model aggregates all available evidence to produce a binary signal as the answer, which is the only training supervision.
We also evaluate our model on a related task that concerns ranking answers of multiple-choice questions given a set of evidencing sentences. We consider the MCTest dataset and the AI2-8grade/CK12 dataset that we introduce below.
The paper is structured as follows. In Sec. SECREF2 , we formally outline the Argus question answering task, describe the question-evidence dataset, and describe the multiple-choice questions task and datasets. In Sec. SECREF3 , we briefly survey the related work on similar problems, whereas in Sec. SECREF4 we propose our neural models for joint learning of sentence relevance and entailment. We present the results in Sec. SECREF5 and conclude with a summary, model usage recommendations and future work directions in Sec. SECREF6 .
The Hypothesis Evaluation Task
Formally, the Hypothesis Evaluation task is to build a function INLINEFORM0 , where INLINEFORM1 is a binary label (no towards yes) and INLINEFORM2 is a hypothesis instance in the form of question text INLINEFORM3 and a set of INLINEFORM4 evidence texts INLINEFORM5 as extracted from an evidence-carrying corpus.
Argus Dataset
Our main aim is to propose a solution to the Argus Task, where the Argus system BIBREF7 BIBREF5 is to automatically analyze and answer questions in the context of the Augur prediction market platform. In a prediction market, users pose questions about future events whereas others bet on the yes or no answer, with the assumption that the bet price reflects the real probability of the event. At a specified moment (e.g. after the date of a to-be-predicted sports match), the correct answer is retroactively determined and the bets are paid off. At a larger volume of questions, determining the bet results may present a significant overhead for running of the market. This motivates the Argus system, which should partially automate this determination — deciding questions related to recent events based on open news sources.
To train a machine learning model for the INLINEFORM0 function, we have created a dataset of questions with gold labels, and produced sets of evidence texts from a variety of newspapers using a pre-existing IR (information retrieval) component of the Argus system. We release this dataset openly.
To pose a reproducible task for the IR component, the time domain of questions was restricted from September 1, 2014 to September 1, 2015, and the topic domain was focused on politics, sports and the stock market. To build the question dataset, we have used several sources:
We asked Amazon Mechanical Turk users to pose questions, together with a golden label and a news article reference. This seeded the dataset with initial, somewhat redundant 250 questions.
We manually extended this dataset by derived questions with reversed polarity (to obtain an opposite answer).
We extended the data with questions autogenerated from 26 templates, pertaining to top sporting event winners and US senate or gubernatorial elections.
To build the evidence dataset, we used the Syphon preprocessing component BIBREF5 of the Argus implementation to identify semantic roles of all question tokens and produce the search keywords if a role was assigned to each token. We then used the IR component to query a corpus of newspaper articles, and kept sentences that contained at least 2/3 of all the keywords. Our corpus of articles contained articles from The Guardian (all articles) and from the New York Times (Sports, Politics and Business sections). Furthermore, we scraped partial archive.org historical data out of 35 RSS feeds from CNN, Reuters, BBC International, CBS News, ABC News, c|net, Financial Times, Skynews and the Washington Post.
For the final dataset, we kept only questions where at least a single piece of evidence was found (i.e. we successfully assigned a role to each token, found some news stories and found at least one sentence with 2/3 of question keywords within). The final size of the dataset is outlined in Fig. FIGREF8 and some examples are shown in Fig. FIGREF9 .
AI2-8grade/CK12 Dataset
The AI2 Elementary School Science Questions (no-diagrams variant) released by the Allen Institute cover 855 basic four-choice questions regarding high school science and follows up to the Allen AI Science Kaggle challenge. The vocabulary includes scientific jargon and named entities, and many questions are not factoid, requiring real-world reasoning or thought experiments.
We have combined each answer with the respective question (by substituting the wh-word in the question by each answer) and retrieved evidence sentences for each hypothesis using Solr search in a collection of CK-12 “Concepts B” textbooks. 525 questions attained any supporting evidence, examples are shown in Fig. FIGREF10 .
We consider this dataset as preliminary since it was not reviewed by a human and many hypotheses are apparently unprovable by the evidence we have gathered (i.e. the theoretical top accuracy is much lower than 1.0). However, we released it to the public and still included it in the comparison as these qualities reflect many realistic datasets of unknown qualities, so we find relative performances of models on such datasets instructive.
MCTest Dataset
The Machine Comprehension Test BIBREF8 dataset has been introduced to provide a challenge for researchers to come up with models that approach human-level reading comprehension, and serve as a higher-level alternative to semantic parsing tasks that enforce a specific knowledge representation. The dataset consists of a set of 660 stories spanning multiple sentences, written in simple and clean language (but with less restricted vocabulary than e.g. the bAbI dataset BIBREF9 ). Each story is accompanied by four questions and each of these lists four possible answers; the questions are tagged as based on just one in-story sentence, or requiring multiple sentence inference. We use an official extension of the dataset for RTE evaluation that again textually merges questions and answers.
The dataset is split in two parts, MC-160 and MC-500, based on provenance but similar in quality. We train all models on a joined training set.
The practical setting differs from the Argus task as the MCTest dataset contains relatively restricted vocabulary and well-formed sentences. Furthermore, the goal is to find the single key point in the story to focus on, while in the Argus setting we may have many pieces of evidence supporting an answer; another specific characteristics of MCTest is that it consists of stories where the ordering and proximity of evidence sentences matters.
Related Work
Our primary concern when integrating natural language query with textual evidence is to find sentence-level representations suitable both for relevance weighing and answer prediction.
Sentence-level representations in the retrieval + inference context have been popularly proposed within the Memory Network framework BIBREF10 , but explored just in the form of averaged word embeddings; the task includes only very simple sentences and a small vocabulary. Much more realistic setting is introduced in the Answer Sentence Selection context BIBREF11 BIBREF6 , with state-of-art models using complex deep neural architectures with attention BIBREF12 , but the selection task consists of only retrieval and no inference (answer prediction). A more indirect retrieval task regarding news summarization was investigated by BIBREF13 .
In the entailment context, BIBREF4 introduced a large dataset with single-evidence sentence pairs (Stanford Natural Language Inference, SNLI), but a larger vocabulary and slightly more complicated (but still conservatively formed) sentences. They also proposed baseline recurrent neural model for modeling sentence representations, while word-level attention based models are being studied more recently BIBREF14 BIBREF15 .
In the MCTest text comprehension challenge BIBREF8 , the leading models use complex engineered features ensembling multiple traditional semantic NLP approaches BIBREF16 . The best deep model so far BIBREF17 uses convolutional neural networks for sentence representations, and attention on multiple levels to pick evidencing sentences.
Neural Model
Our approach is to use a sequence of word embeddings to build sentence embeddings for each hypothesis and respective evidence, then use the sentence embeddings to estimate relevance and entailment of each evidence with regard to the respective hypothesis, and finally integrate the evidence to a single answer.
Sentence Embeddings
To produce sentence embeddings, we investigated the neural models proposed in the dataset-sts framework for deep learning of sentence pair scoring functions. BIBREF6
We refer the reader to BIBREF6 and its references for detailed model descriptions. We evaluate an RNN model which uses bidirectionally summed GRU memory cells BIBREF18 and uses the final states as embeddings; a CNN model which uses sentence-max-pooled convolutional filters as embeddings BIBREF19 ; an RNN-CNN model which puts the CNN on top of per-token GRU outputs rather than the word embeddings BIBREF20 ; and an attn1511 model inspired by BIBREF20 that integrates the RNN-CNN model with per-word attention to build hypothesis-specific evidence embeddings. We also report the baseline results of avg mean of word embeddings in the sentence with projection matrix and DAN Deep Averaging Network model that employs word-level dropout and adds multiple nonlinear transformations on top of the averaged embeddings BIBREF21 .
The original attn1511 model BIBREF6 (as tuned for the Answer Sentence Selection task) used a softmax attention mechanism that would effectively select only a few key words of the evidence to focus on — for a hypothesis-evidence token INLINEFORM0 scalar attention score INLINEFORM1 , the focus INLINEFORM2 is: INLINEFORM3
A different focus mechanism exhibited better performance in the Hypothesis Evaluation task, modelling per-token attention more independently: INLINEFORM0
We also use relu instead of tanh in the CNNs.
As model input, we use the standard GloVe embeddings BIBREF22 extended with binary inputs denoting token type and overlap with token or bigram in the paired sentence, as described in BIBREF6 . However, we introduce two changes to the word embedding model — we use 50-dimensional embeddings instead of 300-dimensional, and rather than building an adaptable embedding matrix from the training set words preinitialized by GloVe, we use only the top 100 most frequent tokens in the adaptable embedding matrix and use fixed GloVe vectors for all other tokens (including tokens not found in the training set). In preliminary experiments, this improved generalization for highly vocabulary-rich tasks like Argus, while still allowing the high-frequency tokens (like interpunction or conjunctions) to learn semantic operator representations.
As an additional method for producing sentence embeddings, we consider the Ubu. RNN transfer learning method proposed by BIBREF6 where an RNN model (as described above) is trained on the Ubuntu Dialogue task BIBREF23 . The pretrained model weights are used to initialize an RNN model which is then fine-tuned on the Hypothesis Evaluation task. We use the same model as originally proposed (except the aforementioned vocabulary handling modification), with the dot-product scoring used for Ubuntu Dialogue training replaced by MLP point-scores described below.
Evidence Integration
Our main proposed schema for evidence integration is Evidence Weighing. From each pair of hypothesis and evidence embeddings, we produce two INLINEFORM0 predictions using a pair of MLP point-scorers of dataset-sts BIBREF6 with sigmoid activation function. The predictions are interpreted as INLINEFORM1 entailment (0 to 1 as no to yes) and relevance INLINEFORM2 . To integrate the predictions across multiple pieces of evidence, we propose a weighed average model: INLINEFORM3
We do not have access to any explicit labels for the evidence, but we train the model end-to-end with just INLINEFORM0 labels and the formula for INLINEFORM1 is differentiable, carrying over the gradient to the sentence embedding model. This can be thought of as a simple passage-wide attention model.
As a baseline strategy, we also consider Evidence Averaging, where we simply produce a single scalar prediction per hypothesis-evidence pair (using the same strategy as above) and decide the hypothesis simply based on the mean prediction across available evidence.
Finally, following success reported in the Answer Sentence Selection task BIBREF6 , we consider a BM25 Feature combined with Evidence Averaging, where the MLP scorer that produces the pair scalar prediction as above takes an additional BM25 word overlap score input BIBREF1 besides the elementwise embedding comparisons.
Experimental Setup
We implement the differentiable model in the Keras framework BIBREF24 and train the whole network from word embeddings to output evidence-integrated hypothesis label using the binary cross-entropy loss as an objective and the Adam optimization algorithm BIBREF25 . We apply INLINEFORM0 regularization and a INLINEFORM1 dropout.
Following the recommendation of BIBREF6 , we report expected test set question accuracy as determined by average accuracy in 16 independent trainings and with 95% confidence intervals based on the Student's t-distribution.
Evaluation
In Fig. FIGREF26 , we report the model performance on the Argus task, showing that the Ubuntu Dialogue transfer RNN outperforms other proposed models by a large margin. However, a comparison of evidence integration approaches in Fig. FIGREF27 shows that evidence integration is not the major deciding factor and there are no staticially meaningful differences between the evaluated approaches. We measured high correlation between classification and relevance scores with Pearson's INLINEFORM0 , showing that our model does not learn a separate evidence weighing function on this task.
In Fig. FIGREF28 , we look at the model performance on the AI2-8grade/CK12 task, repeating the story of Ubuntu Dialogue transfer RNN dominating other models. However, on this task our proposed evidence weighing scheme improves over simpler approaches — but just on the best model, as shown in Fig. FIGREF29 . On the other hand, the simplest averaging model benefits from at least BM25 information to select relevant evidence, apparently.
For the MCTest dataset, Fig. FIGREF30 compares our proposed models with the current state-of-art ensemble of hand-crafted syntactic and frame-semantic features BIBREF16 , as well as past neural models from the literature, all using attention mechanisms — the Attentive Reader of BIBREF26 , Neural Reasoner of BIBREF27 and the HABCNN model family of BIBREF17 . We see that averaging-based models are surprisingly effective on this task, and in particular on the MC-500 dataset it can beat even the best so far reported model of HABCNN-TE. Our proposed transfer model is statistically equivalent to the best model on both datasets (furthermore, previous work did not include confidence intervals, even though their models should also be stochastically initialized).
As expected, our models did badly on the multiple-evidence class of questions — we made no attempt to model information flow across adjacent sentences in our models as this aspect is unique to MCTest in the context of our work.
Interestingly, evidence weighing does play an important role on the MCTest task as shown in Fig. FIGREF31 , significantly boosting model accuracy. This confirms that a mechanism to allocate attention to different sentences is indeed crucial for this task.
Analysis
While we can universally proclaim Ubu. RNN as the best model, we observe many aspects of the Hypothesis Evaluation problem that are shared by the AI2-8grade/CK12 and MCTest tasks, but not by the Argus task.
Our largest surprise lies in the ineffectivity of evidence weighing on the Argus task, since observations of irrelevant passages initially led us to investigate this model. We may also see that non-pretrained RNN does very well on the Argus task while CNN is a better model otherwise.
An aspect that could explain this rift is that the latter two tasks are primarily retrieval based, where we seek to judge each evidence as irrelevant or essentially a paraphrase of the hypothesis. On the other hand, the Argus task is highly semantic and compositional, with the questions often differing just by a presence of negation — recurrent model that can capture long-term dependencies and alter sentence representations based on the presence of negation may represent an essential improvement over an n-gram-like convolutional scheme. We might also attribute the lack of success of evidence weighing in the Argus task to a more conservative scheme of passage retrieval employed in the IR pipeline that produced the dataset. Given the large vocabulary and noise levels in the data, we may also simply require more data to train the evidence weighing properly.
We see from the training vs. test accuracies that RNN-based models (including the word-level attention model) have a strong tendency to overfit on our small datasets, while CNN is much more resilient. While word-level attention seems appealing for such a task, we speculate that we simply might not have enough training data to properly train it. Investigating attention transfer is a point for future work — by our preliminary experiments on multiple datasets, attention models appear more task specific than the basic text comprehension models of memory based RNNs.
One concrete limitation of our models in case of the Argus task is a problem of reconciling particular named entity instances. The more obvious form of this issue is Had Roger Federer beat Martin Cilic in US OPEN 2014? versus an opposite Had Martin Cilic beat Roger Federer in US OPEN 2014? — another form of this problem is reconciling a hypothesis like Will the Royals win the World Series? with evidence Giants Win World Series With Game 7 Victory Over Royals. An abstract embedding of the sentence will not carry over the required information — it is important to explicitly pass and reconcile the roles of multiple named entities which cannot be meaningfully embedded in a GloVe-like semantic vector space.
Conclusion
We have established a general Hypothesis Evaluation task with three datasets of various properties, and shown that neural models can exhibit strong performance (with less hand-crafting effort than non-neural classifiers). We propose an evidence weighing model that is never harmful and improves performance on some tasks. We also demonstrate that simple models can outperform or closely match performance of complex architectures; all the models we consider are task-independent and were successfully used in different contexts than Hypothesis Evaluation BIBREF6 . Our results empirically show that a basic RNN text comprehension model well trained on a large dataset (even if the task is unrelated and vocabulary characteristics are very different) outperforms or matches more complex architectures trained only on the dataset of the task at hand.
Finally, on the MCTest dataset, our best proposed model is better or statistically indistinguishable from the best neural model reported so far BIBREF17 , even though it has a simpler architecture and only a naive attention mechanism.
We would like to draw several recommendations for future research from our findings: (A) encourage usage of basic neural architectures as evaluation baselines; (B) suggest that future research includes models pretrained on large data as baselines; (C) validate complex architectures on tasks with large datasets if they cannot beat baselines on small datasets; and (D) for randomized machine comprehension models (e.g. neural networks with random weight initialization, batch shuffling or probabilistic dropout), report expected test set performance based on multiple independent training runs.
As a general advice for solving complex tasks with small datasets, besides the point (B) above our analysis suggests convolutional networks as the best models regarding the tendency to overfit, unless semantic composionality plays a crucial role in the task; in this scenario, simple averaging-based models are a great start as well. Preinitializing a model also helps against overfitting.
We release our implementation of the Argus task, evidence integration models and processing of all the evaluated datasets as open source.
We believe the next step towards machine comprehension NLP models (based on deep learning but capable of dealing with real-world, large-vocabulary data) will involve research into a better way to deal with entities without available embeddings. When distinguishing specific entities, simple word-level attention mechanisms will not do. A promising approach could extend the flexibility of the final sentence representation, moving from attention mechanism to a memory mechanism by allowing the network to remember a set of “facts” derived from each sentence; related work has been done for example on end-to-end differentiable shift-reduce parsers with LSTM as stack cells BIBREF28 .
Acknowledgments
This work was co-funded by the Augur Project of the Forecast Foundation and financially supported by the Grant Agency of the Czech Technical University in Prague, grant No. SGS16/ 084/OHK3/1T/13. Computational resources were provided by the CESNET LM2015042 and the CERIT Scientific Cloud LM2015085, provided under the programme “Projects of Large Research, Development, and Innovations Infrastructures.”
We'd like to thank Peronet Despeignes of the Augur Project for his support. Carl Burke has provided instructions for searching CK-12 ebooks within the Kaggle challenge. | Unanswerable |
Q: what datasets did they use?
Introduction
Let us consider the goal of building machine reasoning systems based on knowledge from fulltext data like encyclopedic articles, scientific papers or news articles. Such machine reasoning systems, like humans researching a problem, must be able to recover evidence from large amounts of retrieved but mostly irrelevant information and judge the evidence to decide the answer to the question at hand.
A typical approach, used implicitly in information retrieval (and its extensions, like IR-based Question Answering systems BIBREF0), is to determine evidence relevancy by a keyword overlap feature (like tf-idf or BM-25 BIBREF1) and prune the evidence by the relevancy score. On the other hand, textual entailment systems that seek to confirm hypotheses based on evidence BIBREF2 BIBREF3 BIBREF4 are typically provided with only a single piece of evidence or only evidence pre-determined as relevant, and are often restricted to short and simple sentences without open-domain named entity occurrences. In this work, we seek to fuse information retrieval and textual entailment recognition by defining the Hypothesis Evaluation task as deciding the truth value of a hypothesis by integrating numerous pieces of evidence, not all of which are equally relevant.
As a specific instance, we introduce the Argus Yes/No Question Answering task. The problem is, given a binary question about a real-world event like Did Donald Trump announce he is running for president? and numerous retrieved news article fragments as evidence, to determine the answer to the question. Our research is motivated by the Argus automatic reporting system for the Augur prediction market platform BIBREF5. Therefore, we consider the question answering task within the constraints of a practical scenario that has a limited available dataset and only minimum supervision. Hence, authentic news sentences are the evidence (with noise like segmentation errors, irrelevant participial phrases, etc.), and whereas we have a gold standard for the correct answers, the model must do without explicit supervision on which individual evidence snippets are relevant and what they entail.
To this end, we introduce an open dataset of questions and newspaper evidence, and a neural model within the Sentence Pair Scoring framework BIBREF6 that (A) learns sentence embeddings for the question and evidence, (B) the embeddings represent both relevance and entailment characteristics as linear classifier inputs, and (C) the model aggregates all available evidence to produce a binary signal as the answer, which is the only training supervision.
We also evaluate our model on a related task that concerns ranking answers of multiple-choice questions given a set of evidencing sentences. We consider the MCTest dataset and the AI2-8grade/CK12 dataset that we introduce below.
The paper is structured as follows. In Sec. SECREF2 , we formally outline the Argus question answering task, describe the question-evidence dataset, and describe the multiple-choice questions task and datasets. In Sec. SECREF3 , we briefly survey the related work on similar problems, whereas in Sec. SECREF4 we propose our neural models for joint learning of sentence relevance and entailment. We present the results in Sec. SECREF5 and conclude with a summary, model usage recommendations and future work directions in Sec. SECREF6 .
The Hypothesis Evaluation Task
Formally, the Hypothesis Evaluation task is to build a function $f(H) \rightarrow y$, where $y \in [0, 1]$ is a binary label (no towards yes) and $H$ is a hypothesis instance in the form of question text $q$ and a set of $N$ evidence texts $e_1, \ldots , e_N$ as extracted from an evidence-carrying corpus.
Argus Dataset
Our main aim is to propose a solution to the Argus Task, where the Argus system BIBREF7 BIBREF5 is to automatically analyze and answer questions in the context of the Augur prediction market platform. In a prediction market, users pose questions about future events whereas others bet on the yes or no answer, with the assumption that the bet price reflects the real probability of the event. At a specified moment (e.g. after the date of a to-be-predicted sports match), the correct answer is retroactively determined and the bets are paid off. At a larger volume of questions, determining the bet results may present a significant overhead for running of the market. This motivates the Argus system, which should partially automate this determination — deciding questions related to recent events based on open news sources.
To train a machine learning model for the function $f$, we have created a dataset of questions with gold labels, and produced sets of evidence texts from a variety of newspapers using a pre-existing IR (information retrieval) component of the Argus system. We release this dataset openly.
To pose a reproducible task for the IR component, the time domain of questions was restricted from September 1, 2014 to September 1, 2015, and the topic domain was restricted to politics, sports and the stock market. To build the question dataset, we have used several sources:
We asked Amazon Mechanical Turk users to pose questions, together with a golden label and a news article reference. This seeded the dataset with an initial set of 250 somewhat redundant questions.
We manually extended this dataset with derived questions of reversed polarity (to obtain the opposite answer).
We extended the data with questions autogenerated from 26 templates, pertaining to top sporting event winners and US Senate or gubernatorial elections.
To build the evidence dataset, we used the Syphon preprocessing component BIBREF5 of the Argus implementation to identify semantic roles of all question tokens and produce the search keywords if a role was assigned to each token. We then used the IR component to query a corpus of newspaper articles, and kept sentences that contained at least 2/3 of all the keywords. Our corpus of articles contained articles from The Guardian (all articles) and from the New York Times (Sports, Politics and Business sections). Furthermore, we scraped partial archive.org historical data out of 35 RSS feeds from CNN, Reuters, BBC International, CBS News, ABC News, c|net, Financial Times, Skynews and the Washington Post.
For the final dataset, we kept only questions where at least one piece of evidence was found (i.e. we successfully assigned a role to each token, found some news stories, and found at least one sentence containing 2/3 of the question keywords). The final size of the dataset is outlined in Fig. FIGREF8 and some examples are shown in Fig. FIGREF9 .
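To make the evidence-gathering heuristic above concrete, the following is a minimal sketch of the sentence filter that keeps sentences containing at least 2/3 of the question keywords. The function and variable names are illustrative and the tokenization is deliberately naive; this is not the actual Syphon/IR implementation.

    def keep_evidence_sentences(keywords, sentences, min_overlap=2.0 / 3.0):
        """Keep sentences that contain at least `min_overlap` of the search keywords."""
        keys = {k.lower() for k in keywords}
        kept = []
        for sent in sentences:
            tokens = {t.strip(".,!?\"'").lower() for t in sent.split()}
            overlap = len(keys & tokens) / max(len(keys), 1)
            if overlap >= min_overlap:
                kept.append(sent)
        return kept

    # Example: a question about a sports result
    keywords = ["Royals", "win", "World", "Series"]
    corpus = [
        "Giants Win World Series With Game 7 Victory Over Royals",
        "The stock market closed higher on Monday.",
    ]
    print(keep_evidence_sentences(keywords, corpus))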
AI2-8grade/CK12 Dataset
The AI2 Elementary School Science Questions (no-diagrams variant) released by the Allen Institute cover 855 basic four-choice questions regarding high school science and follow up on the Allen AI Science Kaggle challenge. The vocabulary includes scientific jargon and named entities, and many questions are not factoid, requiring real-world reasoning or thought experiments.
We have combined each answer with the respective question (by substituting the wh-word in the question by each answer) and retrieved evidence sentences for each hypothesis using Solr search in a collection of CK-12 “Concepts B” textbooks. 525 questions attained at least some supporting evidence; examples are shown in Fig. FIGREF10 .
We consider this dataset as preliminary since it was not reviewed by a human and many hypotheses are apparently unprovable by the evidence we have gathered (i.e. the theoretical top accuracy is much lower than 1.0). However, we released it to the public and still included it in the comparison, as these properties reflect many realistic datasets of unknown quality, so we find the relative performance of models on such datasets instructive.
MCTest Dataset
The Machine Comprehension Test BIBREF8 dataset has been introduced to provide a challenge for researchers to come up with models that approach human-level reading comprehension, and serve as a higher-level alternative to semantic parsing tasks that enforce a specific knowledge representation. The dataset consists of a set of 660 stories spanning multiple sentences, written in simple and clean language (but with less restricted vocabulary than e.g. the bAbI dataset BIBREF9 ). Each story is accompanied by four questions and each of these lists four possible answers; the questions are tagged as based on just one in-story sentence, or requiring multiple sentence inference. We use an official extension of the dataset for RTE evaluation that again textually merges questions and answers.
The dataset is split in two parts, MC-160 and MC-500, based on provenance but similar in quality. We train all models on a joined training set.
The practical setting differs from the Argus task as the MCTest dataset contains a relatively restricted vocabulary and well-formed sentences. Furthermore, the goal is to find the single key point in the story to focus on, while in the Argus setting we may have many pieces of evidence supporting an answer; another specific characteristic of MCTest is that it consists of stories where the ordering and proximity of evidence sentences matter.
Related Work
Our primary concern when integrating natural language query with textual evidence is to find sentence-level representations suitable both for relevance weighing and answer prediction.
Sentence-level representations in the retrieval + inference context have been popularly proposed within the Memory Network framework BIBREF10 , but explored just in the form of averaged word embeddings; the task includes only very simple sentences and a small vocabulary. A much more realistic setting is introduced in the Answer Sentence Selection context BIBREF11 BIBREF6 , with state-of-the-art models using complex deep neural architectures with attention BIBREF12 , but the selection task consists of only retrieval and no inference (answer prediction). A more indirect retrieval task regarding news summarization was investigated by BIBREF13 .
In the entailment context, BIBREF4 introduced a large dataset with single-evidence sentence pairs (Stanford Natural Language Inference, SNLI), with a larger vocabulary and slightly more complicated (but still conservatively formed) sentences. They also proposed a baseline recurrent neural model for modeling sentence representations, while word-level attention based models are being studied more recently BIBREF14 BIBREF15 .
In the MCTest text comprehension challenge BIBREF8 , the leading models use complex engineered features ensembling multiple traditional semantic NLP approaches BIBREF16 . The best deep model so far BIBREF17 uses convolutional neural networks for sentence representations, and attention on multiple levels to pick evidencing sentences.
Neural Model
Our approach is to use a sequence of word embeddings to build sentence embeddings for each hypothesis and respective evidence, then use the sentence embeddings to estimate relevance and entailment of each evidence with regard to the respective hypothesis, and finally integrate the evidence to a single answer.
Sentence Embeddings
To produce sentence embeddings, we investigated the neural models proposed in the dataset-sts framework for deep learning of sentence pair scoring functions. BIBREF6
We refer the reader to BIBREF6 and its references for detailed model descriptions. We evaluate an RNN model which uses bidirectionally summed GRU memory cells BIBREF18 and uses the final states as embeddings; a CNN model which uses sentence-max-pooled convolutional filters as embeddings BIBREF19 ; an RNN-CNN model which puts the CNN on top of per-token GRU outputs rather than the word embeddings BIBREF20 ; and an attn1511 model inspired by BIBREF20 that integrates the RNN-CNN model with per-word attention to build hypothesis-specific evidence embeddings. We also report baseline results of avg (the mean of word embeddings in the sentence, followed by a projection matrix) and of the DAN Deep Averaging Network model, which employs word-level dropout and adds multiple nonlinear transformations on top of the averaged embeddings BIBREF21 .
The original attn1511 model BIBREF6 (as tuned for the Answer Sentence Selection task) used a softmax attention mechanism that would effectively select only a few key words of the evidence to focus on — for a hypothesis-evidence token $t$ with scalar attention score $s_t$, the focus $f_t$ is: $f_t = \mathrm{softmax}(s)_t = e^{s_t} / \sum _{t^{\prime }} e^{s_{t^{\prime }}}$
A different focus mechanism exhibited better performance in the Hypothesis Evaluation task, modelling per-token attention more independently, i.e. each token's focus is determined from its own score rather than being normalized across all tokens of the evidence.
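The two focus mechanisms can be contrasted with a small numerical sketch. The softmax variant follows the description above; the "independent" variant is assumed here to be an elementwise sigmoid, which is one natural way to score tokens without normalizing across the sentence, and is an illustration rather than the paper's exact formula.

    import numpy as np

    def softmax_focus(scores):
        # Normalizes across tokens: a few large scores dominate the rest.
        e = np.exp(scores - scores.max())
        return e / e.sum()

    def independent_focus(scores):
        # Assumed variant: each token's focus depends only on its own score.
        return 1.0 / (1.0 + np.exp(-scores))

    scores = np.array([4.0, 3.5, 0.1, -2.0])
    print(softmax_focus(scores))      # concentrates mass on the top tokens
    print(independent_focus(scores))  # several tokens can stay "on" at once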
We also use relu instead of tanh in the CNNs.
As model input, we use the standard GloVe embeddings BIBREF22 extended with binary inputs denoting token type and overlap with token or bigram in the paired sentence, as described in BIBREF6 . However, we introduce two changes to the word embedding model — we use 50-dimensional embeddings instead of 300-dimensional, and rather than building an adaptable embedding matrix from the training set words preinitialized by GloVe, we use only the top 100 most frequent tokens in the adaptable embedding matrix and use fixed GloVe vectors for all other tokens (including tokens not found in the training set). In preliminary experiments, this improved generalization for highly vocabulary-rich tasks like Argus, while still allowing the high-frequency tokens (like interpunction or conjunctions) to learn semantic operator representations.
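A sketch of the vocabulary handling described above: only the most frequent tokens get rows in a trainable embedding matrix, and everything else is looked up in fixed GloVe vectors. The GloVe table and the frequency counting below are simplified placeholders, not the actual data pipeline.

    import numpy as np

    def build_embeddings(token_counts, glove, dim=50, n_adaptable=100):
        """Return (trainable matrix, token->row map, lookup function)."""
        frequent = [t for t, _ in sorted(token_counts.items(),
                                         key=lambda kv: -kv[1])[:n_adaptable]]
        index = {t: i for i, t in enumerate(frequent)}
        # Adaptable rows are preinitialized from GloVe when available.
        trainable = np.stack([glove.get(t, np.zeros(dim)) for t in frequent])

        def lookup(token):
            if token in index:                       # adaptable, updated by training
                return trainable[index[token]]
            return glove.get(token, np.zeros(dim))   # fixed GloVe, incl. unseen tokens

        return trainable, index, lookup

    # Toy usage with a fake two-word "GloVe" table:
    glove = {"the": np.ones(50), "and": np.full(50, 0.5)}
    counts = {"the": 1000, "and": 800, "federer": 3}
    trainable, index, lookup = build_embeddings(counts, glove, n_adaptable=2)
    print(lookup("federer").shape)  # falls back to the fixed (here zero) vector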
As an additional method for producing sentence embeddings, we consider the Ubu. RNN transfer learning method proposed by BIBREF6 where an RNN model (as described above) is trained on the Ubuntu Dialogue task BIBREF23 . The pretrained model weights are used to initialize an RNN model which is then fine-tuned on the Hypothesis Evaluation task. We use the same model as originally proposed (except the aforementioned vocabulary handling modification), with the dot-product scoring used for Ubuntu Dialogue training replaced by MLP point-scores described below.
Evidence Integration
Our main proposed schema for evidence integration is Evidence Weighing. From each pair of hypothesis and evidence embeddings, we produce two $[0, 1]$ predictions using a pair of MLP point-scorers of dataset-sts BIBREF6 with a sigmoid activation function. The predictions are interpreted as entailment $t_i$ (0 to 1 as no to yes) and relevance $r_i$. To integrate the predictions across multiple pieces of evidence, we propose a weighted average model: $y = \frac{\sum _i r_i t_i}{\sum _i r_i}$
We do not have access to any explicit labels for the evidence, but we train the model end-to-end with just the $y$ labels, and the formula for $y$ is differentiable, carrying over the gradient to the sentence embedding model. This can be thought of as a simple passage-wide attention model.
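The weighted-average integration can be written in a few lines. Here the per-evidence entailment and relevance are taken as given numbers in [0, 1] (in the model they come from the two MLP point-scorers), and a small epsilon guards against an all-zero relevance denominator; this is a plain sketch of the formula, not the trained Keras graph.

    import numpy as np

    def integrate_evidence(entailment, relevance, eps=1e-8):
        """Weighted average of per-evidence entailment, weighted by relevance."""
        entailment = np.asarray(entailment, dtype=float)
        relevance = np.asarray(relevance, dtype=float)
        return float((relevance * entailment).sum() / (relevance.sum() + eps))

    # Three evidence sentences: two irrelevant, one relevant and affirmative.
    print(integrate_evidence(entailment=[0.2, 0.4, 0.9],
                             relevance=[0.05, 0.1, 0.95]))  # close to 0.9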
As a baseline strategy, we also consider Evidence Averaging, where we simply produce a single scalar prediction per hypothesis-evidence pair (using the same strategy as above) and decide the hypothesis simply based on the mean prediction across available evidence.
Finally, following success reported in the Answer Sentence Selection task BIBREF6 , we consider a BM25 Feature combined with Evidence Averaging, where the MLP scorer that produces the pair scalar prediction as above takes an additional BM25 word overlap score input BIBREF1 besides the elementwise embedding comparisons.
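For reference, a minimal Okapi BM25 scorer of the kind used as the extra word-overlap feature. The parameter values k1 and b below are the common defaults, not necessarily those used in the experiments, and the tokenization is assumed to have happened upstream.

    import math
    from collections import Counter

    def bm25_score(query_tokens, doc_tokens, corpus, k1=1.5, b=0.75):
        """Okapi BM25 score of one document for a query, given the whole corpus."""
        N = len(corpus)
        avgdl = sum(len(d) for d in corpus) / N
        tf = Counter(doc_tokens)
        score = 0.0
        for term in query_tokens:
            n_t = sum(1 for d in corpus if term in d)          # document frequency
            idf = math.log((N - n_t + 0.5) / (n_t + 0.5) + 1.0)
            f = tf[term]
            denom = f + k1 * (1 - b + b * len(doc_tokens) / avgdl)
            score += idf * f * (k1 + 1) / denom if denom else 0.0
        return score

    corpus = [["royals", "win", "world", "series"], ["stock", "market", "rallies"]]
    print(bm25_score(["royals", "win"], corpus[0], corpus))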
Experimental Setup
We implement the differentiable model in the Keras framework BIBREF24 and train the whole network from word embeddings to the output evidence-integrated hypothesis label using the binary cross-entropy loss as an objective and the Adam optimization algorithm BIBREF25 . We apply $L_2$ regularization and dropout.
Following the recommendation of BIBREF6 , we report expected test set question accuracy as determined by average accuracy in 16 independent trainings and with 95% confidence intervals based on the Student's t-distribution.
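The reported numbers can be reproduced from per-run accuracies with the usual Student's t confidence interval. A sketch follows, assuming scipy is available; the 16 accuracies below are made-up placeholders.

    import numpy as np
    from scipy import stats

    def mean_with_ci(accuracies, confidence=0.95):
        acc = np.asarray(accuracies, dtype=float)
        n = len(acc)
        mean = acc.mean()
        sem = acc.std(ddof=1) / np.sqrt(n)
        half_width = stats.t.ppf((1 + confidence) / 2.0, df=n - 1) * sem
        return mean, half_width

    runs = [0.74, 0.76, 0.73, 0.75, 0.77, 0.74, 0.75, 0.76,
            0.73, 0.74, 0.75, 0.76, 0.74, 0.75, 0.77, 0.73]  # placeholder values
    mean, ci = mean_with_ci(runs)
    print(f"accuracy {mean:.3f} +/- {ci:.3f} (95% CI, 16 runs)")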
Evaluation
In Fig. FIGREF26 , we report the model performance on the Argus task, showing that the Ubuntu Dialogue transfer RNN outperforms other proposed models by a large margin. However, a comparison of evidence integration approaches in Fig. FIGREF27 shows that evidence integration is not the major deciding factor and there are no statistically meaningful differences between the evaluated approaches. We measured a high correlation between classification and relevance scores (Pearson's $r$), showing that our model does not learn a separate evidence weighing function on this task.
In Fig. FIGREF28 , we look at the model performance on the AI2-8grade/CK12 task, repeating the story of the Ubuntu Dialogue transfer RNN dominating other models. However, on this task our proposed evidence weighing scheme improves over simpler approaches — but just on the best model, as shown in Fig. FIGREF29 . On the other hand, the simplest averaging model apparently benefits from at least the BM25 information for selecting relevant evidence.
For the MCTest dataset, Fig. FIGREF30 compares our proposed models with the current state-of-art ensemble of hand-crafted syntactic and frame-semantic features BIBREF16 , as well as past neural models from the literature, all using attention mechanisms — the Attentive Reader of BIBREF26 , Neural Reasoner of BIBREF27 and the HABCNN model family of BIBREF17 . We see that averaging-based models are surprisingly effective on this task, and in particular on the MC-500 dataset it can beat even the best so far reported model of HABCNN-TE. Our proposed transfer model is statistically equivalent to the best model on both datasets (furthermore, previous work did not include confidence intervals, even though their models should also be stochastically initialized).
As expected, our models did badly on the multiple-evidence class of questions — we made no attempt to model information flow across adjacent sentences in our models as this aspect is unique to MCTest in the context of our work.
Interestingly, evidence weighing does play an important role on the MCTest task as shown in Fig. FIGREF31 , significantly boosting model accuracy. This confirms that a mechanism to allocate attention to different sentences is indeed crucial for this task.
Analysis
While we can universally proclaim Ubu. RNN as the best model, we observe many aspects of the Hypothesis Evaluation problem that are shared by the AI2-8grade/CK12 and MCTest tasks, but not by the Argus task.
Our largest surprise lies in the ineffectiveness of evidence weighing on the Argus task, since observations of irrelevant passages initially led us to investigate this model. We may also see that the non-pretrained RNN does very well on the Argus task while the CNN is a better model otherwise.
An aspect that could explain this rift is that the latter two tasks are primarily retrieval based, where we seek to judge each evidence as irrelevant or essentially a paraphrase of the hypothesis. On the other hand, the Argus task is highly semantic and compositional, with the questions often differing just by a presence of negation — recurrent model that can capture long-term dependencies and alter sentence representations based on the presence of negation may represent an essential improvement over an n-gram-like convolutional scheme. We might also attribute the lack of success of evidence weighing in the Argus task to a more conservative scheme of passage retrieval employed in the IR pipeline that produced the dataset. Given the large vocabulary and noise levels in the data, we may also simply require more data to train the evidence weighing properly.
We see from the training vs. test accuracies that RNN-based models (including the word-level attention model) have a strong tendency to overfit on our small datasets, while CNN is much more resilient. While word-level attention seems appealing for such a task, we speculate that we simply might not have enough training data to properly train it. Investigating attention transfer is a point for future work — by our preliminary experiments on multiple datasets, attention models appear more task specific than the basic text comprehension models of memory based RNNs.
One concrete limitation of our models in case of the Argus task is a problem of reconciling particular named entity instances. The more obvious form of this issue is Had Roger Federer beat Martin Cilic in US OPEN 2014? versus an opposite Had Martin Cilic beat Roger Federer in US OPEN 2014? — another form of this problem is reconciling a hypothesis like Will the Royals win the World Series? with evidence Giants Win World Series With Game 7 Victory Over Royals. An abstract embedding of the sentence will not carry over the required information — it is important to explicitly pass and reconcile the roles of multiple named entities which cannot be meaningfully embedded in a GloVe-like semantic vector space.
Conclusion
We have established a general Hypothesis Evaluation task with three datasets of various properties, and shown that neural models can exhibit strong performance (with less hand-crafting effort than non-neural classifiers). We propose an evidence weighing model that is never harmful and improves performance on some tasks. We also demonstrate that simple models can outperform or closely match performance of complex architectures; all the models we consider are task-independent and were successfully used in different contexts than Hypothesis Evaluation BIBREF6 . Our results empirically show that a basic RNN text comprehension model well trained on a large dataset (even if the task is unrelated and vocabulary characteristics are very different) outperforms or matches more complex architectures trained only on the dataset of the task at hand.
Finally, on the MCTest dataset, our best proposed model is better or statistically indistinguishable from the best neural model reported so far BIBREF17 , even though it has a simpler architecture and only a naive attention mechanism.
We would like to draw several recommendations for future research from our findings: (A) encourage usage of basic neural architectures as evaluation baselines; (B) suggest that future research includes models pretrained on large data as baselines; (C) validate complex architectures on tasks with large datasets if they cannot beat baselines on small datasets; and (D) for randomized machine comprehension models (e.g. neural networks with random weight initialization, batch shuffling or probabilistic dropout), report expected test set performance based on multiple independent training runs.
As general advice for solving complex tasks with small datasets, besides point (B) above, our analysis suggests convolutional networks as the best models regarding the tendency to overfit, unless semantic compositionality plays a crucial role in the task; in this scenario, simple averaging-based models are a great start as well. Preinitializing a model also helps against overfitting.
We release our implementation of the Argus task, evidence integration models and processing of all the evaluated datasets as open source.
We believe the next step towards machine comprehension NLP models (based on deep learning but capable of dealing with real-world, large-vocabulary data) will involve research into a better way to deal with entities without available embeddings. When distinguishing specific entities, simple word-level attention mechanisms will not do. A promising approach could extend the flexibility of the final sentence representation, moving from attention mechanism to a memory mechanism by allowing the network to remember a set of “facts” derived from each sentence; related work has been done for example on end-to-end differentiable shift-reduce parsers with LSTM as stack cells BIBREF28 .
Acknowledgments
This work was co-funded by the Augur Project of the Forecast Foundation and financially supported by the Grant Agency of the Czech Technical University in Prague, grant No. SGS16/084/OHK3/1T/13. Computational resources were provided by the CESNET LM2015042 and the CERIT Scientific Cloud LM2015085, provided under the programme “Projects of Large Research, Development, and Innovations Infrastructures.”
We'd like to thank Peronet Despeignes of the Augur Project for his support. Carl Burke has provided instructions for searching CK-12 ebooks within the Kaggle challenge.
A: Argus Dataset, AI2-8grade/CK12 Dataset, MCTest Dataset
Q: What evaluation metric is used?
Introduction
In recent years, Transformer has been remarkably adept at sequence learning tasks like machine translation BIBREF0, BIBREF1, text classification BIBREF2, BIBREF3, language modeling BIBREF4, BIBREF5, etc. It is solely based on an attention mechanism that captures global dependencies between input tokens, dispensing with recurrence and convolutions entirely. The key idea of the self-attention mechanism is updating token representations based on a weighted sum of all input representations.
However, recent research BIBREF6 has shown that the Transformer has surprising shortcomings in long sequence learning, exactly because of its use of self-attention. As shown in Figure 1 (a), in the task of machine translation, the performance of Transformer drops with the increase of the source sentence length, especially for long sequences. The reason is that the attention can be over-concentrated and dispersed, as shown in Figure 1 (b), and only a small number of tokens are represented by attention. It may work fine for shorter sequences, but for longer sequences it causes insufficient representation of information and makes it difficult for the model to comprehend the source information in full. In recent work, local attention that constrains the attention to focus on only part of the sequences BIBREF7, BIBREF8 is used to address this problem. However, it costs self-attention the ability to capture long-range dependencies and also does not demonstrate effectiveness in sequence to sequence learning tasks.
To build a module with both the inductive bias of local and global context modelling in sequence to sequence learning, we hybridize self-attention with convolution and present parallel multi-scale attention, called MUSE. It encodes inputs into hidden representations and then applies self-attention and depth-separable convolution transformations in parallel. The convolution compensates for the insufficient use of local information while the self-attention focuses on capturing the dependencies. Moreover, this parallel structure is highly extensible, as new transformations can be easily introduced as new parallel branches, and it is also favourable to parallel computation.
The main contributions are summarized as follows:
We find that the attention mechanism alone suffers from dispersed weights and is not suitable for long sequence representation learning. The proposed method tries to address this problem and achieves much better performance on generating long sequences.
We propose a parallel multi-scale attention and explore a simple but efficient method to successfully combine convolution with self-attention all in one module.
MUSE outperforms all previous models with the same training data and comparable model size, with state-of-the-art BLEU scores on three main machine translation tasks.
MUSE-simple introduces parallel representation learning and brings extensibility and parallelism. Experiments show that the inference speed can be increased by 31% on GPUs.
MUSE: Parallel Multi-Scale Attention
Like other sequence-to-sequence models, MUSE also adopts an encoder-decoder framework. The encoder takes a sequence of word embeddings $(x_1, \cdots , x_n)$ as input where $n$ is the length of input. It transfers word embeddings to a sequence of hidden representation ${z} = (z_1, \cdots , z_n)$. Given ${z}$, the decoder is responsible for generating a sequence of text $(y_1, \cdots , y_m)$ token by token.
The encoder is a stack of $N$ MUSE modules. Residual mechanism and layer normalization are used to connect two adjacent layers. The decoder is similar to encoder, except that each MUSE module in the decoder not only captures features from the generated text representations but also performs attention over the output of the encoder stack through additional context attention. Residual mechanism and layer normalization are also used to connect two modules and two adjacent layers.
The key part of the proposed model is the MUSE module, which contains three main parts: self-attention for capturing global features, depth-wise separable convolution for capturing local features, and a position-wise feed-forward network for capturing token features. The module takes the output of the $(i-1)$-th layer as input and generates its output representation by fusing the three transformations, where “Attention” refers to self-attention, “Conv” refers to dynamic convolution, and “Pointwise” refers to a position-wise feed-forward network. The following subsections list the details of each part. We also propose MUSE-simple, a simple version of MUSE, which generates the output representation in the same way except that it does not include the convolution operation.
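A compact PyTorch sketch of one encoder-side MUSE-style block follows, under the assumption that the three parallel branches are simply summed before the residual connection and layer normalization. The real model's exact fusion, head counts, and dynamic-convolution details differ, so treat this as an illustration of the parallel structure rather than the reference implementation; it also assumes a recent PyTorch with batch_first attention.

    import torch
    import torch.nn as nn

    class MuseBlockSketch(nn.Module):
        def __init__(self, d_model=384, n_heads=4, kernel_size=7, d_ff=768):
            super().__init__()
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            # Depth-wise convolution: one filter per channel (groups=d_model).
            self.conv = nn.Conv1d(d_model, d_model, kernel_size,
                                  padding=kernel_size // 2, groups=d_model)
            self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                    nn.Linear(d_ff, d_model))
            self.norm = nn.LayerNorm(d_model)

        def forward(self, x):                      # x: (batch, length, d_model)
            attn_out, _ = self.attn(x, x, x)       # global context
            conv_out = self.conv(x.transpose(1, 2)).transpose(1, 2)  # local context
            ff_out = self.ff(x)                    # token-level features
            # Assumed additive fusion of the three parallel branches.
            return self.norm(x + attn_out + conv_out + ff_out)

    block = MuseBlockSketch()
    out = block(torch.randn(2, 30, 384))
    print(out.shape)  # torch.Size([2, 30, 384])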
MUSE: Parallel Multi-Scale Attention ::: Attention Mechanism for Global Context Representation
Self-attention is responsible for learning representations of the global context. For a given input sequence $X$, it first projects $X$ into three representations, key $K$, query $Q$, and value $V$. Then, it uses a self-attention mechanism to get the output representation:
$\mathrm{Attention}(X) = \sigma (Q, K, V_1)\, W^O$, with $Q = X W^Q$, $K = X W^K$, $V_1 = V W^V$,
where $W^O$, $W^Q$, $W^K$, and $W^V$ are projection parameters. The self-attention operation $\sigma $ is the (scaled) dot-product between key, query, and value pairs:
$\sigma (Q, K, V_1) = \mathrm{softmax}\left( Q K^{\top } / \sqrt{d_k} \right) V_1$
Note that we conduct a projecting operation over the value in our self-attention mechanism, $V_1 = V W^V$, here.
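The $\sigma $ operation above is the familiar scaled dot-product attention; a small numpy sketch, with the value projection applied before attending, makes the data flow explicit. The dimensions and random weights are arbitrary.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv, Wo):
        Q, K = X @ Wq, X @ Wk
        V1 = X @ Wv                                   # projected values
        d_k = Q.shape[-1]
        weights = softmax(Q @ K.T / np.sqrt(d_k))     # (n, n) attention map
        return weights @ V1 @ Wo                      # output representation

    n, d = 5, 8
    rng = np.random.default_rng(0)
    X = rng.normal(size=(n, d))
    out = self_attention(X, *(rng.normal(size=(d, d)) for _ in range(4)))
    print(out.shape)  # (5, 8)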
MUSE: Parallel Multi-Scale Attention ::: Convolution for Local Context Modeling
We introduce convolution operations into MUSE to capture local context. To learn contextual sequence representations in the same hidden space, we choose depth-wise convolution BIBREF9 (we denote it as DepthConv in the experiments) as the convolution operation because it includes two separate transformations, namely, a point-wise projecting transformation and a contextual transformation. This is because the original convolution operator is not separable, whereas DepthConv can share the same point-wise projecting transformation with the self-attention mechanism. We choose dynamic convolution BIBREF10, the best variant of DepthConv, as our implementation.
Each convolution sub-module contains multiple cells with different kernel sizes. They are used for capturing different-range features. The output of the convolution cell with kernel size $k$ is:
$\mathrm{Conv}_k(X) = \mathrm{Depth\_conv}_k(V_2)\, W^{out}$, with $V_2 = X W^{V}$,
where $W^{V}$ and $W^{out}$ are parameters, and $W^{V}$ is a point-wise projecting transformation matrix. $\mathrm{Depth\_conv}$ refers to the depth convolution in the work of BIBREF10: for an input sequence $X \in \mathbb {R}^{n \times d}$, it convolves each of the $d$ channels independently along the length dimension to compute the output $O \in \mathbb {R}^{n \times d}$, where $d$ is the hidden size. Note that we conduct the same projecting operation over the input in our convolution mechanism, $V_2 = X W^V$, as in the self-attention mechanism.
Shared projection To learn contextual sequence representations in the same hidden space, the projection in the self-attention mechanism, $V_1=VW^V$, and that in the convolution mechanism, $V_2=XW^V$, are shared, because the shared projection can map the input feature into the same hidden space. If we conduct two independent projections here, $V_1=VW_1^V$ and $V_2=XW^V_2$, where $W_1^V$ and $W_2^V$ are two parameter matrices, we call it separate projection. We will analyze the necessity of applying shared projection here instead of separate projection.
Dynamically Selected Convolution Kernels We introduce a gating mechanism to automatically select the weight of different convolution cells.
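A sketch of how the gate over convolution cells could look, assuming the gate is a learned softmax over per-cell scores derived from the pooled input; the actual gating parameterization in MUSE may differ, and the kernel sizes 3 and 15 mirror the experimental setup described later.

    import torch
    import torch.nn as nn

    class GatedMultiKernelConv(nn.Module):
        """Mix depth-wise conv cells with different kernel sizes via a learned gate."""
        def __init__(self, d_model=384, kernel_sizes=(3, 15)):
            super().__init__()
            self.cells = nn.ModuleList([
                nn.Conv1d(d_model, d_model, k, padding=k // 2, groups=d_model)
                for k in kernel_sizes])
            self.gate = nn.Linear(d_model, len(kernel_sizes))

        def forward(self, x):                         # x: (batch, length, d_model)
            weights = torch.softmax(self.gate(x.mean(dim=1)), dim=-1)  # (batch, n_cells)
            outs = [c(x.transpose(1, 2)).transpose(1, 2) for c in self.cells]
            mixed = sum(w.view(-1, 1, 1) * o
                        for w, o in zip(weights.unbind(dim=-1), outs))
            return mixed

    layer = GatedMultiKernelConv()
    print(layer(torch.randn(2, 20, 384)).shape)  # torch.Size([2, 20, 384])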
MUSE: Parallel Multi-Scale Attention ::: Point-wise Feed-forward Network for Capturing Token Representations
To learn token level representations, MUSE concatenates a self-attention network with a position-wise feed-forward network at each layer. Since the linear transformations are the same across different positions, the position-wise feed-forward network can be seen as a token feature extractor:
$\mathrm{Pointwise}(x) = \max (0, x W_1 + b_1)\, W_2 + b_2$,
where $W_1$, $b_1$, $W_2$, and $b_2$ are projection parameters.
Experiment
We evaluate MUSE on four machine translation tasks. This section describes the datasets, experimental settings, detailed results, and analysis.
Experiment ::: Datasets
WMT14 En-Fr and En-De datasets The WMT 2014 English-French translation dataset, consisting of $36M$ sentence pairs, is adopted as a big dataset to test our model. We use the standard split of development set and test set. We use newstest2014 as the test set and use newstest2012 +newstest2013 as the development set. Following BIBREF11, we also adopt a joint source and target BPE factorization with the vocabulary size of $40K$. For medium dataset, we borrow the setup of BIBREF0 and adopt the WMT 2014 English-German translation dataset which consists of $4.5M$ sentence pairs, the BPE vocabulary size is set to $32K$. The test and validation datasets we used are the same as BIBREF0.
IWSLT De-En and En-Vi datasets Besides, we perform experiments on two small IWSLT datasets to test the small version of MUSE with other comparable models. The IWSLT 2014 German-English translation dataset consists of $160k$ sentence pairs. We also adopt a joint source and target BPE factorization with the vocabulary size of $32K$. The IWSLT 2015 English-Vietnamese translation dataset consists of $133K$ training sentence pairs. For the En-Vi task, we build a dictionary including all source and target tokens. The vocabulary size for English is $17.2K$, and the vocabulary size for the Vietnamese is $6.8K$.
Experiment ::: Experimental Settings ::: Model
For fair comparisons, we only compare models reported with a comparable model size and the same training data. We do not compare with BIBREF12 because it is an ensemble method. We build MUSE-base and MUSE-large with parameter sizes comparable to Transformer-base and Transformer-large. We adopt multi-head attention BIBREF0 as the implementation of self-attention in the MUSE module. The number of attention heads is set to 4 for MUSE-base and 16 for MUSE-large. We also add the network architecture built by MUSE-simple in a similar way to the comparison.
MUSE consists of 12 residual blocks for the encoder and 12 residual blocks for the decoder; the dimension is set to 384 for MUSE-base and 768 for MUSE-large. The hidden dimension of the non-linear transformation is set to 768 for MUSE-base and 3072 for MUSE-large.
The MUSE-large is trained on 4 Titan RTX GPUs while the MUSE-base is trained on a single NVIDIA RTX 2080Ti GPU. The batch size is calculated at the token level, which is called dynamic batching BIBREF0. We adopt dynamic convolution as the variant of depth-wise separable convolution. We tune the kernel size on the validation set. For convolution with a single kernel, we use the kernel size of 7 for all layers. In case of dynamic selected kernels, the kernel size is 3 for small kernels and 15 for large kernels for all layers.
Experiment ::: Experimental Settings ::: Training
The training hyper-parameters are tuned on the validation set.
MUSE-large For training MUSE-large, following BIBREF13, parameters are updated every 32 steps. We train the model for $80K$ updates with a batch size of 5120 for En-Fr, and train the model for ${30K}$ updates with a batch size of 3584 for En-De. The dropout rate is set to $0.1$ for En-Fr and ${0.3}$ for En-De. We borrow the setup of optimizer from BIBREF10 and use the cosine learning rate schedule with 10000 warmup steps. The max learning rate is set to $0.001$ on En-De translation and ${0.0007}$ on En-Fr translation. For checkpoint averaging, following BIBREF10, we tune the average checkpoints for En-De translation tasks. For En-Fr translation, we do not average checkpoint but use the final single checkpoint.
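The learning-rate schedule can be sketched as linear warmup followed by cosine decay. The step counts and maximum rate below mirror the En-De setting above, while details such as the minimum rate in the referenced schedule may differ; this is only an illustration.

    import math

    def warmup_cosine_lr(step, max_lr=0.001, warmup=10000, total_steps=30000):
        if step < warmup:                       # linear warmup
            return max_lr * step / warmup
        progress = (step - warmup) / max(total_steps - warmup, 1)
        return 0.5 * max_lr * (1.0 + math.cos(math.pi * min(progress, 1.0)))

    for s in (0, 5000, 10000, 20000, 30000):
        print(s, round(warmup_cosine_lr(s), 6))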
MUSE-base We train and test MUSE-base on two small datasets, IWSLT 2014 De-En translation and IWSLT2015 En-Vi translation. Following BIBREF0, we use Adam optimizer with a learning rate of $0.001$. We use the warmup mechanism and invert the learning rate decay with warmup updates of $4K$. For the De-En dataset, we train the model for $20K$ steps with a batch size of $4K$. The parameters are updated every 4 steps. The dropout rate is set to $0.4$. For the En-Vi dataset, we train the model for $10K$ steps with a batch size of $4K$. The parameters are also updated every 4 steps. The dropout rate is set to $0.3$. We save checkpoints every epoch and average the last 10 checkpoints for inference.
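Checkpoint averaging itself is a simple parameter-wise mean over saved state dicts; a sketch follows, with placeholder file paths and a hypothetical `model` object in the commented usage.

    import torch

    def average_checkpoints(paths):
        """Average model parameters over several saved checkpoints."""
        avg = None
        for path in paths:
            state = torch.load(path, map_location="cpu")
            if avg is None:
                avg = {k: v.clone().float() for k, v in state.items()}
            else:
                for k, v in state.items():
                    avg[k] += v.float()
        return {k: v / len(paths) for k, v in avg.items()}

    # averaged = average_checkpoints([f"checkpoints/epoch{i}.pt" for i in range(10, 20)])
    # model.load_state_dict(averaged)  # `model` is the trained translation model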
Experiment ::: Experimental Settings ::: Evaluation
During inference, we adopt beam search with a beam size of 5 for De-En, En-Fr and En-Vi translation tasks. The length penalty is set to 0.8 for En-Fr according to the validation results, 1 for the two small datasets following the default setting of BIBREF14. We do not tune beam width and length penalty but use the setting reported in BIBREF0. The BLEU metric is adopted to evaluate the model performance during evaluation.
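BLEU can be computed on detokenized outputs with, for example, the sacrebleu package. Whether the reported scores come from sacrebleu or the classic multi-bleu script is not specified here, so this is only an illustration of the metric; the sentences are toy placeholders.

    import sacrebleu

    hypotheses = ["the cat sat on the mat", "he plays football on sundays"]
    references = [["the cat sat on the mat", "he plays soccer on sundays"]]

    bleu = sacrebleu.corpus_bleu(hypotheses, references)
    print(round(bleu.score, 2))  # corpus-level BLEU in the usual 0-100 range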
Experiment ::: Results
As shown in Table TABREF24, MUSE outperforms all previous models on En-De and En-Fr translation, including both state-of-the-art models of stand-alone self-attention BIBREF0, BIBREF13, and convolutional models BIBREF11, BIBREF15, BIBREF10. This result shows that neither self-attention nor convolution alone is enough for sequence to sequence learning. The proposed parallel multi-scale attention improves over them both on En-De and En-Fr.
Compared to Evolved Transformer BIBREF19 which is constructed by NAS and also mixes convolutions of different kernel size, MUSE achieves 2.2 BLEU gains in En-Fr translation.
Relative position or local attention constraints bring improvements over the original self-attention model, but parallel multi-scale attention outperforms them.
MUSE can also scale to small models and small datasets; as depicted in Table TABREF25, MUSE-base pushes the state-of-the-art from 35.7 to 36.3 on the IWSLT De-En translation dataset.
It is shown in Table TABREF24 and Table TABREF25 that MUSE-simple which contains the basic idea of parallel multi-scale attention achieves state-of-the-art performance on three major machine translation datasets.
Experiment ::: How do we propose effective parallel multi-scale attention?
In this subsection we compare MUSE and its variants on IWSLT 2014 De-En translation to answer the question.
Does concatenating self-attention with convolution certainly improve the model? To bridge the gap between point-wise transformation which learns token level representations and self-attention which learns representations of global context, we introduce convolution to enhance our multi-scale attention. As we can see from the first experiment group of Table TABREF27, convolution is important in the parallel multi-scale attention. However, it is not easy to combine convolution and self-attention in one module to build better representations on sequence to sequence tasks. As shown in the first line of both second and third group of Table TABREF27, simply learning local representations by using convolution or depth-wise separable convolution in parallel with self-attention harms the performance. Furthermore, combining depth-wise separable convolution (in this work we choose its best variant dynamic convolution as implementation) is even worse than combining convolution.
Why do we choose DepthConv and what is the importance of sharing the projection of DepthConv and self-attention? We conjecture that convolution and self-attention both learn contextual sequence representations and they should share the point-wise transformation and perform the contextual transformation in the same hidden space. We first project the input to a hidden representation and perform a variant of depth-wise convolution and self-attention transformations in parallel. The first two experiments in the third group of Table TABREF27 validate the utility of sharing the projection in parallel multi-scale attention: shared projection gains 1.4 BLEU over separate projection, and brings an improvement of 0.5 BLEU over MUSE-simple (without DepthConv).
How large should the kernel size be? Comparative experiments show that a too-large kernel harms performance both for DepthConv and for convolution. Since self-attention and point-wise transformations are also present, simply applying the growing kernel size schedule proposed in SliceNet BIBREF15 does not work. Thus, we propose to use dynamically selected kernel sizes to let the learned network decide the kernel size for each layer.
Experiment ::: Further Analysis ::: Parallel multi-scale attention brings time efficiency on GPUs
The underlying parallel structure (compared to the sequential structure in each block of Transformer) allows MUSE to be efficiently computed on GPUs. For example, we can combine small matrices into large matrices, and while it does not reduce the number of actual operations, it can be better parallelized by GPUs to speed up computation. Concretely, for each MUSE module, we first concatenate $W^Q,W^K,W^V$ of self-attention and $W_1$ of the point-wise feed-forward transformation into a single encoder matrix $W^{Enc}$, and then perform transformations such as self-attention, depth-separable convolution, and nonlinear transformation, in parallel, to learn multi-scale representations in the hidden layer. $W^O,W_2,W^{out}$ can also be combined into a single decoder matrix $W^{Dec}$. The decoder of the sequence to sequence architecture can be implemented similarly.
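The matrix fusion trick amounts to concatenating the per-branch projection matrices along the output dimension so that one large matmul replaces several small ones. A numpy sketch verifies the equivalence; the shapes are illustrative, not the model's actual sizes.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, d_ff = 30, 512, 2048
    X = rng.normal(size=(n, d))
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    W1 = rng.normal(size=(d, d_ff))

    # Separate projections: four small matmuls.
    Q, K, V, H = X @ Wq, X @ Wk, X @ Wv, X @ W1

    # Fused projection: one big matmul, then split the columns.
    W_enc = np.concatenate([Wq, Wk, Wv, W1], axis=1)       # (d, 3d + d_ff)
    fused = X @ W_enc
    Q2, K2, V2, H2 = np.split(fused, [d, 2 * d, 3 * d], axis=1)

    print(np.allclose(Q, Q2) and np.allclose(K, K2)
          and np.allclose(V, V2) and np.allclose(H, H2))    # True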
In Table TABREF31, we conduct comparisons to show the speed gains with the aforementioned implementation, and the batch size is set to one sample per batch to simulate online inference environment. Under the settings, where the numbers of parameters are similar for MUSE and Transformer, about 31% increase in inference speed can be obtained. The experiments use MUSE with 6 MUSE-simple modules and Transformer with 6 base blocks. The hidden size is set to 512.
Parallel multi-scale attention generates much better long sequences As demonstrated in Figure FIGREF32, MUSE generates better sequences of various lengths than self-attention, and it is remarkably adept at generating long sequences; e.g., for sequences longer than 100 tokens, MUSE is twice as good.
Lower layers prefer local context and higher layers prefer more contextual representations MUSE contains multiple dynamic convolution cells, whose streams are fused by a gated mechanism. The weight for each dynamic cell is a scalar. Here we analyze the weights of the different dynamic convolution cells in different layers. Figure FIGREF32 shows that as the layer depth increases, the weight of dynamic convolution cells with small kernel sizes gradually decreases. This demonstrates that lower layers prefer local features while higher layers prefer global features, which corresponds to the finding in BIBREF26.
MUSE not only gains BLEU scores but also generates more reasonable sentences and improves translation quality. We conduct a case study on the De-En dataset; the cases are shown in Table TABREF34 in the Appendix. In case 1, although the baseline Transformer translates many words correctly according to the source sentence, the translated sentence is not fluent at all. This indicates that the Transformer does not capture the relationship between some words and their neighbors, such as “right” and “clap”. By contrast, MUSE captures them well by combining local convolution with global self-attention. In case 2, the causal adverbial clause is correctly translated by MUSE, while the Transformer misses the word “why” and fails to translate it.
Related Work
Sequence to sequence learning is an important task in machine learning. It involves understanding and generating sequences. Machine translation is the touchstone of sequence to sequence learning. Traditional approaches usually adopt long short-term memory networks BIBREF27, BIBREF28 to learn the representation of sequences. However, these models are either built upon auto-regressive structures requiring longer encoding time or perform worse on real-world natural language processing tasks. Recent studies explore convolutional neural networks (CNN) BIBREF11 or self-attention BIBREF0 to support highly parallel sequence modeling that does not require an auto-regressive structure during encoding, thus bringing large efficiency improvements. They are strong at capturing local or global dependencies.
There are several studies on combining self-attention and convolution. However, they do not surpass both the convolutional and the self-attention mechanisms. BIBREF4 propose to augment convolution with self-attention by directly concatenating them in computer vision tasks. However, as demonstrated in Table TABREF27, their method does not work for the sequence to sequence learning task. Moreover, state-of-the-art models on question answering tasks still rely on self-attention alone and do not adopt the ideas of QAnet BIBREF29. Both self-attention BIBREF13 and convolution BIBREF10 outperform the Evolved Transformer by nearly 2 BLEU on En-Fr translation. It seems that learning global and local context by stacking self-attention and convolution layers does not beat either self-attention or convolution models. In contrast, the proposed parallel multi-scale attention outperforms previous convolution- or self-attention-based models on the main translation tasks, showing its effectiveness for sequence to sequence learning.
Conclusion and Future work
Although the self-attention mechanism has been prevalent in sequence modeling, we find that attention suffers from dispersed weights especially for long sequences, resulting from the insufficient local information.
To address this problem, we present Parallel Multi-scale Attention (MUSE) and MUSE-simple. MUSE-simple introduces the idea of parallel multi-scale attention into sequence to sequence learning, and MUSE fuses self-attention, convolution, and point-wise transformation together to explicitly learn global, local, and token-level sequence representations. In particular, we find from empirical results that the shared projection plays an important part in its success and is essential for our multi-scale learning.
Beyond the inspiring new state-of-the-art results on three major machine translation datasets, detailed analysis and model variants also verify the effectiveness of MUSE.
For future work, the parallel structure is highly extensible and provides many opportunities to improve these models. In addition, given the success of the shared projection, we would like to explore its detailed effects on contextual representation learning. Finally, we are excited about the future of parallel multi-scale attention and plan to apply this simple but effective idea to other tasks, including image and speech.
Conclusion and Future work ::: Acknowledgments
This work was supported in part by National Natural Science Foundation of China (No. 61673028). | The BLEU metric |
9bd938859a8b063903314a79f09409af8801c973 | 9bd938859a8b063903314a79f09409af8801c973_0 | Q: What datasets are used?
Text: Introduction
In recent years, Transformer has been remarkably adept at sequence learning tasks like machine translation BIBREF0, BIBREF1, text classification BIBREF2, BIBREF3, language modeling BIBREF4, BIBREF5, etc. It is solely based on an attention mechanism that captures global dependencies between input tokens, dispensing with recurrence and convolutions entirely. The key idea of the self-attention mechanism is updating token representations based on a weighted sum of all input representations.
However, recent research BIBREF6 has shown that the Transformer has surprising shortcomings in long sequence learning, exactly because of its use of self-attention. As shown in Figure 1 (a), in the task of machine translation, the performance of the Transformer drops as the source sentence length increases, especially for long sequences. The reason is that the attention can be over-concentrated and dispersed, as shown in Figure 1 (b), and only a small number of tokens are represented by attention. It may work fine for shorter sequences, but for longer sequences it causes insufficient representation of information and makes it difficult for the model to comprehend the source information in its entirety. In recent work, local attention that constrains the attention to focus on only part of the sequence BIBREF7, BIBREF8 is used to address this problem. However, it costs self-attention the ability to capture long-range dependencies and also does not demonstrate effectiveness in sequence to sequence learning tasks.
To build a module with the inductive biases of both local and global context modelling in sequence to sequence learning, we hybridize self-attention with convolution and present parallel multi-scale attention, called MUSE. It encodes inputs into hidden representations and then applies self-attention and depth-wise separable convolution transformations in parallel. The convolution compensates for the insufficient use of local information, while the self-attention focuses on capturing the dependencies. Moreover, this parallel structure is highly extensible: new transformations can be easily introduced as new parallel branches, and it is also favourable to parallel computation.
The main contributions are summarized as follows:
We find that the attention mechanism alone suffers from dispersed weights and is not suitable for long sequence representation learning. The proposed method tries to address this problem and achieves much better performance on generating long sequences.
We propose a parallel multi-scale attention and explore a simple but efficient method to successfully combine convolution with self-attention all in one module.
MUSE outperforms all previous models with the same training data and comparable model size, achieving state-of-the-art BLEU scores on three main machine translation tasks.
MUSE-simple introduces parallel representation learning and brings extensibility and parallelism. Experiments show that the inference speed can be increased by 31% on GPUs.
MUSE: Parallel Multi-Scale Attention
Like other sequence-to-sequence models, MUSE also adopts an encoder-decoder framework. The encoder takes a sequence of word embeddings $(x_1, \cdots , x_n)$ as input where $n$ is the length of input. It transfers word embeddings to a sequence of hidden representation ${z} = (z_1, \cdots , z_n)$. Given ${z}$, the decoder is responsible for generating a sequence of text $(y_1, \cdots , y_m)$ token by token.
The encoder is a stack of $N$ MUSE modules. Residual mechanism and layer normalization are used to connect two adjacent layers. The decoder is similar to encoder, except that each MUSE module in the decoder not only captures features from the generated text representations but also performs attention over the output of the encoder stack through additional context attention. Residual mechanism and layer normalization are also used to connect two modules and two adjacent layers.
The key part in the proposed model is the MUSE module, which contains three main parts: self-attention for capturing global features, depth-wise separable convolution for capturing local features, and a position-wise feed-forward network for capturing token features. The module takes the output of $(i-1)$ layer as input and generates the output representation in a fusion way:
where “Attention” refers to self-attention, “Conv” refers to dynamic convolution, and “Pointwise” refers to a position-wise feed-forward network. The following subsections list the details of each part. We also propose MUSE-simple, a simple version of MUSE, which generates the output representation in the same way as the MUSE model except that it does not include the convolution operation:
MUSE: Parallel Multi-Scale Attention ::: Attention Mechanism for Global Context Representation
Self-attention is responsible for learning representations of global context. For a given input sequence $X$, it first projects $X$ into three representations, key $K$, query $Q$, and value $V$. Then, it uses a self-attention mechanism to get the output representation:
where $W^O$, $W^Q$, $W^K$, and $W^V$ are projection parameters. The self-attention operation $\sigma $ is the dot product between key, query, and value pairs:
Note that we conduct a projecting operation over the value in our self-attention mechanism $V_1=VW^V$ here.
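As a concrete illustration of this branch, here is a minimal single-head PyTorch sketch; the names, the bias-free projections, and the scaling by the square root of the dimension are our choices, and the paper's actual implementation is multi-head.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleHeadSelfAttention(nn.Module):
    """Sketch of the global-context branch: project to Q, K, V (note the
    value projection V_1 = V W^V mentioned above), apply dot-product
    attention, then project the output with W^O."""
    def __init__(self, d_model: int):
        super().__init__()
        self.w_q = nn.Linear(d_model, d_model, bias=False)
        self.w_k = nn.Linear(d_model, d_model, bias=False)
        self.w_v = nn.Linear(d_model, d_model, bias=False)
        self.w_o = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.w_q(x), self.w_k(x), self.w_v(x)
        scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(q.size(-1))
        attn = F.softmax(scores, dim=-1)
        return self.w_o(torch.matmul(attn, v))
```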
MUSE: Parallel Multi-Scale Attention ::: Convolution for Local Context Modeling
We introduce convolution operations into MUSE to capture local context. To learn contextual sequence representations in the same hidden space, we choose depth-wise convolution BIBREF9 (we denote it as DepthConv in the experiments) as the convolution operation because it includes two separate transformations, namely, a point-wise projecting transformation and a contextual transformation. The original convolution operator is not separable, whereas DepthConv can share the same point-wise projecting transformation with the self-attention mechanism. We choose dynamic convolution BIBREF10, the best variant of DepthConv, as our implementation.
Each convolution sub-module contains multiple cells with different kernel sizes. They are used for capturing different-range features. The output of the convolution cell with kernel size $k$ is:
where $W^{V}$ and $W^{out}$ are parameters, $W^{V}$ is a point-wise projecting transformation matrix. The $Depth\_conv$ refers to depth convolution in the work of BIBREF10. For an input sequence $X$, the output $O$ is computed as:
where $d$ is the hidden size. Note that we conduct the same projecting operation over the input in our convolution mechanism $V_2=XW^V$ here with that in self-attention mechanism.
Shared projection To learn contextual sequence representations in the same hidden space, the projection in the self-attention mechanism $V_1=VW_V$ and that in the convolution mechanism $V_2=XW^V$ is shared. Because the shared projection can project the input feature into the same hidden space. If we conduct two independent projection here: $V_1=VW_1^V$ and $V_2=XW^V_2$, where $W_1^V$ and $W_2^V$ are two parameter matrices, we call it as separate projection. We will analyze the necessity of applying shared projection here instead of separate projection.
Dynamically Selected Convolution Kernels We introduce a gating mechanism to automatically select the weight of different convolution cells.
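The sketch below illustrates the shared point-wise projection and a gated mixture of depth-wise convolution cells; it uses plain depth-wise convolutions rather than the dynamic convolution variant adopted in the paper, the kernel sizes 3 and 15 follow the settings reported later, and everything else is an assumption on our part.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedDepthwiseConvBranch(nn.Module):
    """Sketch of the local-context branch: a shared projection V_2 = X W^V
    (reused by the attention branch in the full model), several depth-wise
    convolution cells with different kernel sizes, and scalar gates that
    weight each cell's output."""
    def __init__(self, d_model: int, kernel_sizes=(3, 15)):
        super().__init__()
        self.shared_proj = nn.Linear(d_model, d_model, bias=False)
        self.cells = nn.ModuleList([
            nn.Conv1d(d_model, d_model, k, padding=k // 2, groups=d_model)
            for k in kernel_sizes])
        self.gates = nn.Parameter(torch.zeros(len(kernel_sizes)))  # one scalar per cell
        self.w_out = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v = self.shared_proj(x).transpose(1, 2)      # (batch, d_model, seq_len)
        weights = F.softmax(self.gates, dim=0)       # gate over kernel sizes
        mixed = sum(w * cell(v) for w, cell in zip(weights, self.cells))
        return self.w_out(mixed.transpose(1, 2))
```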
MUSE: Parallel Multi-Scale Attention ::: Point-wise Feed-forward Network for Capturing Token Representations
To learn token-level representations, MUSE concatenates a self-attention network with a position-wise feed-forward network at each layer. Since the linear transformations are the same across different positions, the position-wise feed-forward network can be seen as a token feature extractor.
where $W_1$, $b_1$, $W_2$, and $b_2$ are projection parameters.
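Putting the three branches together, a hypothetical MUSE-like module can be assembled as below; residual connections, layer normalization, and the exact way the value projection is shared are omitted, so this is a sketch rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class PositionwiseFeedForward(nn.Module):
    """Token-level branch: two linear maps applied identically at every position."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.w_1 = nn.Linear(d_model, d_hidden)  # W_1, b_1
        self.w_2 = nn.Linear(d_hidden, d_model)  # W_2, b_2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w_2(torch.relu(self.w_1(x)))

class MuseLikeModule(nn.Module):
    """Fuses global (attention), local (convolution), and token (point-wise)
    representations by summing the outputs of the three parallel branches."""
    def __init__(self, attention: nn.Module, convolution: nn.Module, pointwise: nn.Module):
        super().__init__()
        self.attention = attention
        self.convolution = convolution
        self.pointwise = pointwise

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.attention(x) + self.convolution(x) + self.pointwise(x)

# Example wiring, reusing the sketches above:
# module = MuseLikeModule(SingleHeadSelfAttention(384),
#                         GatedDepthwiseConvBranch(384),
#                         PositionwiseFeedForward(384, 768))
```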
Experiment
We evaluate MUSE on four machine translation tasks. This section describes the datasets, experimental settings, detailed results, and analysis.
Experiment ::: Datasets
WMT14 En-Fr and En-De datasets The WMT 2014 English-French translation dataset, consisting of $36M$ sentence pairs, is adopted as a big dataset to test our model. We use the standard split of development set and test set. We use newstest2014 as the test set and use newstest2012 +newstest2013 as the development set. Following BIBREF11, we also adopt a joint source and target BPE factorization with the vocabulary size of $40K$. For medium dataset, we borrow the setup of BIBREF0 and adopt the WMT 2014 English-German translation dataset which consists of $4.5M$ sentence pairs, the BPE vocabulary size is set to $32K$. The test and validation datasets we used are the same as BIBREF0.
IWSLT De-En and En-Vi datasets Besides, we perform experiments on two small IWSLT datasets to test the small version of MUSE with other comparable models. The IWSLT 2014 German-English translation dataset consists of $160k$ sentence pairs. We also adopt a joint source and target BPE factorization with the vocabulary size of $32K$. The IWSLT 2015 English-Vietnamese translation dataset consists of $133K$ training sentence pairs. For the En-Vi task, we build a dictionary including all source and target tokens. The vocabulary size for English is $17.2K$, and the vocabulary size for the Vietnamese is $6.8K$.
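For readers who want to reproduce this preprocessing, a joint source-target subword vocabulary of this kind can be built, for example, with the sentencepiece library; the file names below are placeholders and the authors' exact tooling may differ.

```python
# Build a joint source-target BPE model (a possible reproduction, not the
# authors' exact pipeline); train.en / train.de are placeholder file names.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="train.en,train.de",   # joint source and target corpora
    model_prefix="joint_bpe",
    vocab_size=32000,            # 32K, as in the En-De setup
    model_type="bpe",
)

sp = spm.SentencePieceProcessor()
sp.load("joint_bpe.model")
print(sp.encode_as_pieces("Parallel multi-scale attention"))
```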
Experiment ::: Experimental Settings ::: Model
For fair comparisons, we only compare models reported with comparable model size and the same training data. We do not compare with BIBREF12 because it is an ensemble method. We build MUSE-base and MUSE-large with parameter sizes comparable to Transformer-base and Transformer-large. We adopt multi-head attention BIBREF0 as the implementation of self-attention in the MUSE module. The number of attention heads is set to 4 for MUSE-base and 16 for MUSE-large. We also add the network architecture built with MUSE-simple, constructed in a similar way, to the comparison.
MUSE consists of 12 residual blocks for encoder and 12 residual blocks for decoder, the dimension is set to 384 for MUSE-base and 768 for MUSE-large. The hidden dimension of non linear transformation is set to 768 for MUSE-base and 3072 for MUSE-large.
The MUSE-large is trained on 4 Titan RTX GPUs while the MUSE-base is trained on a single NVIDIA RTX 2080Ti GPU. The batch size is calculated at the token level, which is called dynamic batching BIBREF0. We adopt dynamic convolution as the variant of depth-wise separable convolution. We tune the kernel size on the validation set. For convolution with a single kernel, we use the kernel size of 7 for all layers. In case of dynamic selected kernels, the kernel size is 3 for small kernels and 15 for large kernels for all layers.
Experiment ::: Experimental Settings ::: Training
The training hyper-parameters are tuned on the validation set.
MUSE-large For training MUSE-large, following BIBREF13, parameters are updated every 32 steps. We train the model for $80K$ updates with a batch size of 5120 for En-Fr, and train the model for ${30K}$ updates with a batch size of 3584 for En-De. The dropout rate is set to $0.1$ for En-Fr and ${0.3}$ for En-De. We borrow the setup of optimizer from BIBREF10 and use the cosine learning rate schedule with 10000 warmup steps. The max learning rate is set to $0.001$ on En-De translation and ${0.0007}$ on En-Fr translation. For checkpoint averaging, following BIBREF10, we tune the average checkpoints for En-De translation tasks. For En-Fr translation, we do not average checkpoint but use the final single checkpoint.
MUSE-base We train and test MUSE-base on two small datasets, IWSLT 2014 De-En translation and IWSLT2015 En-Vi translation. Following BIBREF0, we use Adam optimizer with a learning rate of $0.001$. We use the warmup mechanism and invert the learning rate decay with warmup updates of $4K$. For the De-En dataset, we train the model for $20K$ steps with a batch size of $4K$. The parameters are updated every 4 steps. The dropout rate is set to $0.4$. For the En-Vi dataset, we train the model for $10K$ steps with a batch size of $4K$. The parameters are also updated every 4 steps. The dropout rate is set to $0.3$. We save checkpoints every epoch and average the last 10 checkpoints for inference.
Experiment ::: Experimental Settings ::: Evaluation
During inference, we adopt beam search with a beam size of 5 for De-En, En-Fr and En-Vi translation tasks. The length penalty is set to 0.8 for En-Fr according to the validation results, 1 for the two small datasets following the default setting of BIBREF14. We do not tune beam width and length penalty but use the setting reported in BIBREF0. The BLEU metric is adopted to evaluate the model performance during evaluation.
Experiment ::: Results
As shown in Table TABREF24, MUSE outperforms all previous models on En-De and En-Fr translation, including both the state-of-the-art stand-alone self-attention models BIBREF0, BIBREF13 and the convolutional models BIBREF11, BIBREF15, BIBREF10. This result shows that neither self-attention nor convolution alone is enough for sequence to sequence learning. The proposed parallel multi-scale attention improves over both on En-De and En-Fr.
Compared to Evolved Transformer BIBREF19 which is constructed by NAS and also mixes convolutions of different kernel size, MUSE achieves 2.2 BLEU gains in En-Fr translation.
Relative position or local attention constraints bring improvements over the original self-attention model, but parallel multi-scale attention outperforms them.
MUSE also scales to small models and small datasets: as shown in Table TABREF25, MUSE-base pushes the state of the art from 35.7 to 36.3 BLEU on the IWSLT De-En translation dataset.
Tables TABREF24 and TABREF25 show that MUSE-simple, which contains the basic idea of parallel multi-scale attention, achieves state-of-the-art performance on three major machine translation datasets.
Experiment ::: How do we propose effective parallel multi-scale attention?
In this subsection, we compare MUSE and its variants on IWSLT 2014 De-En translation to answer this question.
Does concatenating self-attention with convolution certainly improve the model? To bridge the gap between point-wise transformation which learns token level representations and self-attention which learns representations of global context, we introduce convolution to enhance our multi-scale attention. As we can see from the first experiment group of Table TABREF27, convolution is important in the parallel multi-scale attention. However, it is not easy to combine convolution and self-attention in one module to build better representations on sequence to sequence tasks. As shown in the first line of both second and third group of Table TABREF27, simply learning local representations by using convolution or depth-wise separable convolution in parallel with self-attention harms the performance. Furthermore, combining depth-wise separable convolution (in this work we choose its best variant dynamic convolution as implementation) is even worse than combining convolution.
Why do we choose DepthConv, and how important is sharing the projection between DepthConv and self-attention? We conjecture that convolution and self-attention both learn contextual sequence representations, so they should share the point-wise transformation and perform the contextual transformation in the same hidden space. We first project the input to a hidden representation and perform a variant of depth-wise convolution and self-attention transformations in parallel. The first two experiments in the third group of Table TABREF27 validate the utility of sharing the projection in parallel multi-scale attention: the shared projection gains 1.4 BLEU over the separate projection and brings an improvement of 0.5 BLEU over MUSE-simple (without DepthConv).
How large should the kernel size be? Comparative experiments show that too large a kernel harms performance for both DepthConv and convolution. Since self-attention and point-wise transformations are already present, simply applying the growing kernel size schedule proposed in SliceNet BIBREF15 does not work. Thus, we propose to use a dynamically selected kernel size and let the learned network decide the kernel size for each layer.
Experiment ::: Further Analysis ::: Parallel multi-scale attention brings time efficiency on GPUs
The underlying parallel structure (compared to the sequential structure in each block of the Transformer) allows MUSE to be computed efficiently on GPUs. For example, we can combine small matrices into large matrices; while this does not reduce the number of actual operations, it can be better parallelized by GPUs to speed up computation. Concretely, for each MUSE module, we first concatenate $W^Q$, $W^K$, $W^V$ of self-attention and $W_1$ of the point-wise feed-forward transformation into a single encoder matrix $W^{Enc}$, and then perform the transformations, such as self-attention, depth-wise separable convolution, and the nonlinear transformation, in parallel to learn multi-scale representations in the hidden layer. $W^O$, $W_2$, and $W^{out}$ can also be combined into a single decoder matrix $W^{Dec}$. The decoder of the sequence to sequence architecture can be implemented similarly.
In Table TABREF31, we conduct comparisons to show the speed gains with the aforementioned implementation, and the batch size is set to one sample per batch to simulate online inference environment. Under the settings, where the numbers of parameters are similar for MUSE and Transformer, about 31% increase in inference speed can be obtained. The experiments use MUSE with 6 MUSE-simple modules and Transformer with 6 base blocks. The hidden size is set to 512.
Parallel multi-scale attention generates much better long sequences As demonstrated in Figure FIGREF32, MUSE generates better sequences of various lengths than self-attention, and it is remarkably adept at generating long sequences; e.g., for sequences longer than 100 tokens, MUSE is twice as good.
Lower layers prefer local context and higher layers prefer more contextual representations MUSE contains multiple dynamic convolution cells, whose streams are fused by a gated mechanism. The weight for each dynamic cell is a scalar. Here we analyze the weights of the different dynamic convolution cells in different layers. Figure FIGREF32 shows that as the layer depth increases, the weight of dynamic convolution cells with small kernel sizes gradually decreases. This demonstrates that lower layers prefer local features while higher layers prefer global features, which corresponds to the finding in BIBREF26.
MUSE not only gains BLEU scores but also generates more reasonable sentences and improves translation quality. We conduct a case study on the De-En dataset; the cases are shown in Table TABREF34 in the Appendix. In case 1, although the baseline Transformer translates many words correctly according to the source sentence, the translated sentence is not fluent at all. This indicates that the Transformer does not capture the relationship between some words and their neighbors, such as “right” and “clap”. By contrast, MUSE captures them well by combining local convolution with global self-attention. In case 2, the causal adverbial clause is correctly translated by MUSE, while the Transformer misses the word “why” and fails to translate it.
Related Work
Sequence to sequence learning is an important task in machine learning. It involves understanding and generating sequences. Machine translation is the touchstone of sequence to sequence learning. Traditional approaches usually adopt long short-term memory networks BIBREF27, BIBREF28 to learn the representation of sequences. However, these models are either built upon auto-regressive structures requiring longer encoding time or perform worse on real-world natural language processing tasks. Recent studies explore convolutional neural networks (CNN) BIBREF11 or self-attention BIBREF0 to support highly parallel sequence modeling that does not require an auto-regressive structure during encoding, thus bringing large efficiency improvements. They are strong at capturing local or global dependencies.
There are several studies on combining self-attention and convolution. However, they do not surpass both the convolutional and the self-attention mechanisms. BIBREF4 propose to augment convolution with self-attention by directly concatenating them in computer vision tasks. However, as demonstrated in Table TABREF27, their method does not work for the sequence to sequence learning task. Moreover, state-of-the-art models on question answering tasks still rely on self-attention alone and do not adopt the ideas of QAnet BIBREF29. Both self-attention BIBREF13 and convolution BIBREF10 outperform the Evolved Transformer by nearly 2 BLEU on En-Fr translation. It seems that learning global and local context by stacking self-attention and convolution layers does not beat either self-attention or convolution models. In contrast, the proposed parallel multi-scale attention outperforms previous convolution- or self-attention-based models on the main translation tasks, showing its effectiveness for sequence to sequence learning.
Conclusion and Future work
Although the self-attention mechanism has been prevalent in sequence modeling, we find that attention suffers from dispersed weights especially for long sequences, resulting from the insufficient local information.
To address this problem, we present Parallel Multi-scale Attention (MUSE) and MUSE-simple. MUSE-simple introduces the idea of parallel multi-scale attention into sequence to sequence learning, and MUSE fuses self-attention, convolution, and point-wise transformation together to explicitly learn global, local, and token-level sequence representations. In particular, we find from empirical results that the shared projection plays an important part in its success and is essential for our multi-scale learning.
Beyond the inspiring new state-of-the-art results on three major machine translation datasets, detailed analysis and model variants also verify the effectiveness of MUSE.
For future work, the parallel structure is highly extensible and provides many opportunities to improve these models. In addition, given the success of the shared projection, we would like to explore its detailed effects on contextual representation learning. Finally, we are excited about the future of parallel multi-scale attention and plan to apply this simple but effective idea to other tasks, including image and speech.
Conclusion and Future work ::: Acknowledgments
This work was supported in part by National Natural Science Foundation of China (No. 61673028). | WMT14 En-Fr and En-De datasets, IWSLT De-En and En-Vi datasets |
68ba5bf18f351e8c83fae7b444cc50bef7437f13 | 68ba5bf18f351e8c83fae7b444cc50bef7437f13_0 | Q: What are three main machine translation tasks?
Text: Introduction
In recent years, Transformer has been remarkably adept at sequence learning tasks like machine translation BIBREF0, BIBREF1, text classification BIBREF2, BIBREF3, language modeling BIBREF4, BIBREF5, etc. It is solely based on an attention mechanism that captures global dependencies between input tokens, dispensing with recurrence and convolutions entirely. The key idea of the self-attention mechanism is updating token representations based on a weighted sum of all input representations.
However, recent research BIBREF6 has shown that the Transformer has surprising shortcomings in long sequence learning, exactly because of its use of self-attention. As shown in Figure 1 (a), in the task of machine translation, the performance of the Transformer drops as the source sentence length increases, especially for long sequences. The reason is that the attention can be over-concentrated and dispersed, as shown in Figure 1 (b), and only a small number of tokens are represented by attention. It may work fine for shorter sequences, but for longer sequences it causes insufficient representation of information and makes it difficult for the model to comprehend the source information in its entirety. In recent work, local attention that constrains the attention to focus on only part of the sequence BIBREF7, BIBREF8 is used to address this problem. However, it costs self-attention the ability to capture long-range dependencies and also does not demonstrate effectiveness in sequence to sequence learning tasks.
To build a module with the inductive biases of both local and global context modelling in sequence to sequence learning, we hybridize self-attention with convolution and present parallel multi-scale attention, called MUSE. It encodes inputs into hidden representations and then applies self-attention and depth-wise separable convolution transformations in parallel. The convolution compensates for the insufficient use of local information, while the self-attention focuses on capturing the dependencies. Moreover, this parallel structure is highly extensible: new transformations can be easily introduced as new parallel branches, and it is also favourable to parallel computation.
The main contributions are summarized as follows:
We find that the attention mechanism alone suffers from dispersed weights and is not suitable for long sequence representation learning. The proposed method tries to address this problem and achieves much better performance on generating long sequences.
We propose a parallel multi-scale attention and explore a simple but efficient method to successfully combine convolution with self-attention all in one module.
MUSE outperforms all previous models with the same training data and comparable model size, achieving state-of-the-art BLEU scores on three main machine translation tasks.
MUSE-simple introduces parallel representation learning and brings extensibility and parallelism. Experiments show that the inference speed can be increased by 31% on GPUs.
MUSE: Parallel Multi-Scale Attention
Like other sequence-to-sequence models, MUSE also adopts an encoder-decoder framework. The encoder takes a sequence of word embeddings $(x_1, \cdots , x_n)$ as input where $n$ is the length of input. It transfers word embeddings to a sequence of hidden representation ${z} = (z_1, \cdots , z_n)$. Given ${z}$, the decoder is responsible for generating a sequence of text $(y_1, \cdots , y_m)$ token by token.
The encoder is a stack of $N$ MUSE modules. Residual mechanism and layer normalization are used to connect two adjacent layers. The decoder is similar to encoder, except that each MUSE module in the decoder not only captures features from the generated text representations but also performs attention over the output of the encoder stack through additional context attention. Residual mechanism and layer normalization are also used to connect two modules and two adjacent layers.
The key part in the proposed model is the MUSE module, which contains three main parts: self-attention for capturing global features, depth-wise separable convolution for capturing local features, and a position-wise feed-forward network for capturing token features. The module takes the output of $(i-1)$ layer as input and generates the output representation in a fusion way:
where “Attention” refers to self-attention, “Conv” refers to dynamic convolution, and “Pointwise” refers to a position-wise feed-forward network. The following subsections list the details of each part. We also propose MUSE-simple, a simple version of MUSE, which generates the output representation in the same way as the MUSE model except that it does not include the convolution operation:
MUSE: Parallel Multi-Scale Attention ::: Attention Mechanism for Global Context Representation
Self-attention is responsible for learning representations of global context. For a given input sequence $X$, it first projects $X$ into three representations, key $K$, query $Q$, and value $V$. Then, it uses a self-attention mechanism to get the output representation:
where $W^O$, $W^Q$, $W^K$, and $W^V$ are projection parameters. The self-attention operation $\sigma $ is the dot product between key, query, and value pairs:
Note that we conduct a projecting operation over the value in our self-attention mechanism $V_1=VW^V$ here.
MUSE: Parallel Multi-Scale Attention ::: Convolution for Local Context Modeling
We introduce convolution operations into MUSE to capture local context. To learn contextual sequence representations in the same hidden space, we choose depth-wise convolution BIBREF9 (we denote it as DepthConv in the experiments) as the convolution operation because it includes two separate transformations, namely, a point-wise projecting transformation and a contextual transformation. The original convolution operator is not separable, whereas DepthConv can share the same point-wise projecting transformation with the self-attention mechanism. We choose dynamic convolution BIBREF10, the best variant of DepthConv, as our implementation.
Each convolution sub-module contains multiple cells with different kernel sizes. They are used for capturing different-range features. The output of the convolution cell with kernel size $k$ is:
where $W^{V}$ and $W^{out}$ are parameters, $W^{V}$ is a point-wise projecting transformation matrix. The $Depth\_conv$ refers to depth convolution in the work of BIBREF10. For an input sequence $X$, the output $O$ is computed as:
where $d$ is the hidden size. Note that we conduct the same projecting operation over the input in our convolution mechanism $V_2=XW^V$ here with that in self-attention mechanism.
Shared projection To learn contextual sequence representations in the same hidden space, the projection in the self-attention mechanism $V_1=VW_V$ and that in the convolution mechanism $V_2=XW^V$ is shared. Because the shared projection can project the input feature into the same hidden space. If we conduct two independent projection here: $V_1=VW_1^V$ and $V_2=XW^V_2$, where $W_1^V$ and $W_2^V$ are two parameter matrices, we call it as separate projection. We will analyze the necessity of applying shared projection here instead of separate projection.
Dynamically Selected Convolution Kernels We introduce a gating mechanism to automatically select the weight of different convolution cells.
MUSE: Parallel Multi-Scale Attention ::: Point-wise Feed-forward Network for Capturing Token Representations
To learn token-level representations, MUSE concatenates a self-attention network with a position-wise feed-forward network at each layer. Since the linear transformations are the same across different positions, the position-wise feed-forward network can be seen as a token feature extractor.
where $W_1$, $b_1$, $W_2$, and $b_2$ are projection parameters.
Experiment
We evaluate MUSE on four machine translation tasks. This section describes the datasets, experimental settings, detailed results, and analysis.
Experiment ::: Datasets
WMT14 En-Fr and En-De datasets The WMT 2014 English-French translation dataset, consisting of $36M$ sentence pairs, is adopted as a big dataset to test our model. We use the standard split of development set and test set. We use newstest2014 as the test set and use newstest2012 +newstest2013 as the development set. Following BIBREF11, we also adopt a joint source and target BPE factorization with the vocabulary size of $40K$. For medium dataset, we borrow the setup of BIBREF0 and adopt the WMT 2014 English-German translation dataset which consists of $4.5M$ sentence pairs, the BPE vocabulary size is set to $32K$. The test and validation datasets we used are the same as BIBREF0.
IWSLT De-En and En-Vi datasets Besides, we perform experiments on two small IWSLT datasets to test the small version of MUSE with other comparable models. The IWSLT 2014 German-English translation dataset consists of $160k$ sentence pairs. We also adopt a joint source and target BPE factorization with the vocabulary size of $32K$. The IWSLT 2015 English-Vietnamese translation dataset consists of $133K$ training sentence pairs. For the En-Vi task, we build a dictionary including all source and target tokens. The vocabulary size for English is $17.2K$, and the vocabulary size for the Vietnamese is $6.8K$.
Experiment ::: Experimental Settings ::: Model
For fair comparisons, we only compare models reported with comparable model size and the same training data. We do not compare with BIBREF12 because it is an ensemble method. We build MUSE-base and MUSE-large with parameter sizes comparable to Transformer-base and Transformer-large. We adopt multi-head attention BIBREF0 as the implementation of self-attention in the MUSE module. The number of attention heads is set to 4 for MUSE-base and 16 for MUSE-large. We also add the network architecture built with MUSE-simple, constructed in a similar way, to the comparison.
MUSE consists of 12 residual blocks for encoder and 12 residual blocks for decoder, the dimension is set to 384 for MUSE-base and 768 for MUSE-large. The hidden dimension of non linear transformation is set to 768 for MUSE-base and 3072 for MUSE-large.
The MUSE-large is trained on 4 Titan RTX GPUs while the MUSE-base is trained on a single NVIDIA RTX 2080Ti GPU. The batch size is calculated at the token level, which is called dynamic batching BIBREF0. We adopt dynamic convolution as the variant of depth-wise separable convolution. We tune the kernel size on the validation set. For convolution with a single kernel, we use the kernel size of 7 for all layers. In case of dynamic selected kernels, the kernel size is 3 for small kernels and 15 for large kernels for all layers.
Experiment ::: Experimental Settings ::: Training
The training hyper-parameters are tuned on the validation set.
MUSE-large For training MUSE-large, following BIBREF13, parameters are updated every 32 steps. We train the model for $80K$ updates with a batch size of 5120 for En-Fr, and train the model for ${30K}$ updates with a batch size of 3584 for En-De. The dropout rate is set to $0.1$ for En-Fr and ${0.3}$ for En-De. We borrow the setup of optimizer from BIBREF10 and use the cosine learning rate schedule with 10000 warmup steps. The max learning rate is set to $0.001$ on En-De translation and ${0.0007}$ on En-Fr translation. For checkpoint averaging, following BIBREF10, we tune the average checkpoints for En-De translation tasks. For En-Fr translation, we do not average checkpoint but use the final single checkpoint.
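A rough sketch of such a schedule, linear warmup followed by cosine decay, is given below; the exact curve used in the paper (borrowed from the dynamic convolution setup) may differ in details such as the minimum learning rate or the number of cycles.

```python
import math

def cosine_lr_with_warmup(step: int, max_lr: float,
                          warmup_steps: int = 10000, total_steps: int = 80000) -> float:
    """Linear warmup to max_lr, then cosine decay toward zero."""
    if step < warmup_steps:
        return max_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * max_lr * (1.0 + math.cos(math.pi * min(1.0, progress)))

# En-Fr setting from the text: max_lr = 0.0007, 10000 warmup steps, 80K updates.
print(cosine_lr_with_warmup(5000, 0.0007))    # still warming up
print(cosine_lr_with_warmup(40000, 0.0007))   # on the cosine decay
```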
MUSE-base We train and test MUSE-base on two small datasets, IWSLT 2014 De-En translation and IWSLT2015 En-Vi translation. Following BIBREF0, we use Adam optimizer with a learning rate of $0.001$. We use the warmup mechanism and invert the learning rate decay with warmup updates of $4K$. For the De-En dataset, we train the model for $20K$ steps with a batch size of $4K$. The parameters are updated every 4 steps. The dropout rate is set to $0.4$. For the En-Vi dataset, we train the model for $10K$ steps with a batch size of $4K$. The parameters are also updated every 4 steps. The dropout rate is set to $0.3$. We save checkpoints every epoch and average the last 10 checkpoints for inference.
Experiment ::: Experimental Settings ::: Evaluation
During inference, we adopt beam search with a beam size of 5 for De-En, En-Fr and En-Vi translation tasks. The length penalty is set to 0.8 for En-Fr according to the validation results, 1 for the two small datasets following the default setting of BIBREF14. We do not tune beam width and length penalty but use the setting reported in BIBREF0. The BLEU metric is adopted to evaluate the model performance during evaluation.
Experiment ::: Results
As shown in Table TABREF24, MUSE outperforms all previous models on En-De and En-Fr translation, including both the state-of-the-art stand-alone self-attention models BIBREF0, BIBREF13 and the convolutional models BIBREF11, BIBREF15, BIBREF10. This result shows that neither self-attention nor convolution alone is enough for sequence to sequence learning. The proposed parallel multi-scale attention improves over both on En-De and En-Fr.
Compared to Evolved Transformer BIBREF19 which is constructed by NAS and also mixes convolutions of different kernel size, MUSE achieves 2.2 BLEU gains in En-Fr translation.
Relative position or local attention constraints bring improvements over the original self-attention model, but parallel multi-scale attention outperforms them.
MUSE also scales to small models and small datasets: as shown in Table TABREF25, MUSE-base pushes the state of the art from 35.7 to 36.3 BLEU on the IWSLT De-En translation dataset.
Tables TABREF24 and TABREF25 show that MUSE-simple, which contains the basic idea of parallel multi-scale attention, achieves state-of-the-art performance on three major machine translation datasets.
Experiment ::: How do we propose effective parallel multi-scale attention?
In this subsection, we compare MUSE and its variants on IWSLT 2014 De-En translation to answer this question.
Does concatenating self-attention with convolution certainly improve the model? To bridge the gap between point-wise transformation which learns token level representations and self-attention which learns representations of global context, we introduce convolution to enhance our multi-scale attention. As we can see from the first experiment group of Table TABREF27, convolution is important in the parallel multi-scale attention. However, it is not easy to combine convolution and self-attention in one module to build better representations on sequence to sequence tasks. As shown in the first line of both second and third group of Table TABREF27, simply learning local representations by using convolution or depth-wise separable convolution in parallel with self-attention harms the performance. Furthermore, combining depth-wise separable convolution (in this work we choose its best variant dynamic convolution as implementation) is even worse than combining convolution.
Why do we choose DepthConv, and how important is sharing the projection between DepthConv and self-attention? We conjecture that convolution and self-attention both learn contextual sequence representations, so they should share the point-wise transformation and perform the contextual transformation in the same hidden space. We first project the input to a hidden representation and perform a variant of depth-wise convolution and self-attention transformations in parallel. The first two experiments in the third group of Table TABREF27 validate the utility of sharing the projection in parallel multi-scale attention: the shared projection gains 1.4 BLEU over the separate projection and brings an improvement of 0.5 BLEU over MUSE-simple (without DepthConv).
How large should the kernel size be? Comparative experiments show that too large a kernel harms performance for both DepthConv and convolution. Since self-attention and point-wise transformations are already present, simply applying the growing kernel size schedule proposed in SliceNet BIBREF15 does not work. Thus, we propose to use a dynamically selected kernel size and let the learned network decide the kernel size for each layer.
Experiment ::: Further Analysis ::: Parallel multi-scale attention brings time efficiency on GPUs
The underlying parallel structure (compared to the sequential structure in each block of the Transformer) allows MUSE to be computed efficiently on GPUs. For example, we can combine small matrices into large matrices; while this does not reduce the number of actual operations, it can be better parallelized by GPUs to speed up computation. Concretely, for each MUSE module, we first concatenate $W^Q$, $W^K$, $W^V$ of self-attention and $W_1$ of the point-wise feed-forward transformation into a single encoder matrix $W^{Enc}$, and then perform the transformations, such as self-attention, depth-wise separable convolution, and the nonlinear transformation, in parallel to learn multi-scale representations in the hidden layer. $W^O$, $W_2$, and $W^{out}$ can also be combined into a single decoder matrix $W^{Dec}$. The decoder of the sequence to sequence architecture can be implemented similarly.
In Table TABREF31, we conduct comparisons to show the speed gains with the aforementioned implementation, and the batch size is set to one sample per batch to simulate online inference environment. Under the settings, where the numbers of parameters are similar for MUSE and Transformer, about 31% increase in inference speed can be obtained. The experiments use MUSE with 6 MUSE-simple modules and Transformer with 6 base blocks. The hidden size is set to 512.
Parallel multi-scale attention generates much better long sequences As demonstrated in Figure FIGREF32, MUSE generates better sequences of various lengths than self-attention, and it is remarkably adept at generating long sequences; e.g., for sequences longer than 100 tokens, MUSE is twice as good.
Lower layers prefer local context and higher layers prefer more contextual representations MUSE contains multiple dynamic convolution cells, whose streams are fused by a gated mechanism. The weight for each dynamic cell is a scalar. Here we analyze the weights of the different dynamic convolution cells in different layers. Figure FIGREF32 shows that as the layer depth increases, the weight of dynamic convolution cells with small kernel sizes gradually decreases. This demonstrates that lower layers prefer local features while higher layers prefer global features, which corresponds to the finding in BIBREF26.
MUSE not only gains BLEU scores but also generates more reasonable sentences and improves translation quality. We conduct a case study on the De-En dataset; the cases are shown in Table TABREF34 in the Appendix. In case 1, although the baseline Transformer translates many words correctly according to the source sentence, the translated sentence is not fluent at all. This indicates that the Transformer does not capture the relationship between some words and their neighbors, such as “right” and “clap”. By contrast, MUSE captures them well by combining local convolution with global self-attention. In case 2, the causal adverbial clause is correctly translated by MUSE, while the Transformer misses the word “why” and fails to translate it.
Related Work
Sequence to sequence learning is an important task in machine learning. It involves understanding and generating sequences. Machine translation is the touchstone of sequence to sequence learning. Traditional approaches usually adopt long short-term memory networks BIBREF27, BIBREF28 to learn the representation of sequences. However, these models are either built upon auto-regressive structures requiring longer encoding time or perform worse on real-world natural language processing tasks. Recent studies explore convolutional neural networks (CNN) BIBREF11 or self-attention BIBREF0 to support highly parallel sequence modeling that does not require an auto-regressive structure during encoding, thus bringing large efficiency improvements. They are strong at capturing local or global dependencies.
There are several studies on combining self-attention and convolution. However, they do not surpass both the convolutional and the self-attention mechanisms. BIBREF4 propose to augment convolution with self-attention by directly concatenating them in computer vision tasks. However, as demonstrated in Table TABREF27, their method does not work for the sequence to sequence learning task. Moreover, state-of-the-art models on question answering tasks still rely on self-attention alone and do not adopt the ideas of QAnet BIBREF29. Both self-attention BIBREF13 and convolution BIBREF10 outperform the Evolved Transformer by nearly 2 BLEU on En-Fr translation. It seems that learning global and local context by stacking self-attention and convolution layers does not beat either self-attention or convolution models. In contrast, the proposed parallel multi-scale attention outperforms previous convolution- or self-attention-based models on the main translation tasks, showing its effectiveness for sequence to sequence learning.
Conclusion and Future work
Although the self-attention mechanism has been prevalent in sequence modeling, we find that attention suffers from dispersed weights especially for long sequences, resulting from the insufficient local information.
To address this problem, we present Parallel Multi-scale Attention (MUSE) and MUSE-simple. MUSE-simple introduces the idea of parallel multi-scale attention into sequence to sequence learning, and MUSE fuses self-attention, convolution, and point-wise transformation together to explicitly learn global, local, and token-level sequence representations. In particular, we find from empirical results that the shared projection plays an important part in its success and is essential for our multi-scale learning.
Beyond the inspiring new state-of-the-art results on three major machine translation datasets, detailed analysis and model variants also verify the effectiveness of MUSE.
For future work, the parallel structure is highly extensible and provides many opportunities to improve these models. In addition, given the success of the shared projection, we would like to explore its detailed effects on contextual representation learning. Finally, we are excited about the future of parallel multi-scale attention and plan to apply this simple but effective idea to other tasks, including image and speech.
Conclusion and Future work ::: Acknowledgments
This work was supported in part by National Natural Science Foundation of China (No. 61673028). | De-En, En-Fr and En-Vi translation tasks |
f6a1125c5621a2f32c9bcdd188dff14efa096083 | f6a1125c5621a2f32c9bcdd188dff14efa096083_0 | Q: How big is improvement in performance over Transformers?
Text: Introduction
In recent years, Transformer has been remarkably adept at sequence learning tasks like machine translation BIBREF0, BIBREF1, text classification BIBREF2, BIBREF3, language modeling BIBREF4, BIBREF5, etc. It is solely based on an attention mechanism that captures global dependencies between input tokens, dispensing with recurrence and convolutions entirely. The key idea of the self-attention mechanism is updating token representations based on a weighted sum of all input representations.
However, recent research BIBREF6 has shown that the Transformer has surprising shortcomings in long sequence learning, exactly because of its use of self-attention. As shown in Figure 1 (a), in the task of machine translation, the performance of the Transformer drops as the source sentence length increases, especially for long sequences. The reason is that the attention can be over-concentrated and dispersed, as shown in Figure 1 (b), and only a small number of tokens are represented by attention. It may work fine for shorter sequences, but for longer sequences it causes insufficient representation of information and makes it difficult for the model to comprehend the source information in its entirety. In recent work, local attention that constrains the attention to focus on only part of the sequence BIBREF7, BIBREF8 is used to address this problem. However, it costs self-attention the ability to capture long-range dependencies and also does not demonstrate effectiveness in sequence to sequence learning tasks.
To build a module with the inductive biases of both local and global context modelling in sequence to sequence learning, we hybridize self-attention with convolution and present parallel multi-scale attention, called MUSE. It encodes inputs into hidden representations and then applies self-attention and depth-wise separable convolution transformations in parallel. The convolution compensates for the insufficient use of local information, while the self-attention focuses on capturing the dependencies. Moreover, this parallel structure is highly extensible: new transformations can be easily introduced as new parallel branches, and it is also favourable to parallel computation.
The main contributions are summarized as follows:
We find that the attention mechanism alone suffers from dispersed weights and is not suitable for long sequence representation learning. The proposed method tries to address this problem and achieves much better performance on generating long sequences.
We propose a parallel multi-scale attention and explore a simple but efficient method to successfully combine convolution with self-attention all in one module.
MUSE outperforms all previous models with the same training data and comparable model size, achieving state-of-the-art BLEU scores on three main machine translation tasks.
MUSE-simple introduces parallel representation learning and brings extensibility and parallelism. Experiments show that the inference speed can be increased by 31% on GPUs.
MUSE: Parallel Multi-Scale Attention
Like other sequence-to-sequence models, MUSE also adopts an encoder-decoder framework. The encoder takes a sequence of word embeddings $(x_1, \cdots , x_n)$ as input where $n$ is the length of input. It transfers word embeddings to a sequence of hidden representation ${z} = (z_1, \cdots , z_n)$. Given ${z}$, the decoder is responsible for generating a sequence of text $(y_1, \cdots , y_m)$ token by token.
The encoder is a stack of $N$ MUSE modules. Residual mechanism and layer normalization are used to connect two adjacent layers. The decoder is similar to encoder, except that each MUSE module in the decoder not only captures features from the generated text representations but also performs attention over the output of the encoder stack through additional context attention. Residual mechanism and layer normalization are also used to connect two modules and two adjacent layers.
The key part in the proposed model is the MUSE module, which contains three main parts: self-attention for capturing global features, depth-wise separable convolution for capturing local features, and a position-wise feed-forward network for capturing token features. The module takes the output of $(i-1)$ layer as input and generates the output representation in a fusion way:
where “Attention” refers to self-attention, “Conv” refers to dynamic convolution, and “Pointwise” refers to a position-wise feed-forward network. The following subsections list the details of each part. We also propose MUSE-simple, a simple version of MUSE, which generates the output representation in the same way as the MUSE model except that it does not include the convolution operation:
MUSE: Parallel Multi-Scale Attention ::: Attention Mechanism for Global Context Representation
Self-attention is responsible for learning representations of global context. For a given input sequence $X$, it first projects $X$ into three representations, key $K$, query $Q$, and value $V$. Then, it uses a self-attention mechanism to get the output representation:
where $W^O$, $W^Q$, $W^K$, and $W^V$ are projection parameters. The self-attention operation $\sigma $ is the dot product between key, query, and value pairs:
Note that we conduct a projecting operation over the value in our self-attention mechanism $V_1=VW^V$ here.
MUSE: Parallel Multi-Scale Attention ::: Convolution for Local Context Modeling
We introduce convolution operations into MUSE to capture local context. To learn contextual sequence representations in the same hidden space, we choose depth-wise convolution BIBREF9 (we denote it as DepthConv in the experiments) as the convolution operation because it includes two separate transformations, namely, a point-wise projecting transformation and a contextual transformation. The original convolution operator is not separable, whereas DepthConv can share the same point-wise projecting transformation with the self-attention mechanism. We choose dynamic convolution BIBREF10, the best variant of DepthConv, as our implementation.
Each convolution sub-module contains multiple cells with different kernel sizes. They are used for capturing different-range features. The output of the convolution cell with kernel size $k$ is:
where $W^{V}$ and $W^{out}$ are parameters; $W^{V}$ is a point-wise projecting transformation matrix, and $Depth\_conv$ refers to the depth-wise convolution of BIBREF10. For an input sequence $X$, the output $O$ is computed as:
where $d$ is the hidden size. Note that the projection over the input in our convolution mechanism, $V_2=XW^V$, is the same as the one used in the self-attention mechanism.
Shared projection To learn contextual sequence representations in the same hidden space, the projection in the self-attention mechanism, $V_1=VW^V$, and that in the convolution mechanism, $V_2=XW^V$, are shared, because the shared projection maps the input features into the same hidden space. If we instead conduct two independent projections, $V_1=VW_1^V$ and $V_2=XW^V_2$, where $W_1^V$ and $W_2^V$ are two parameter matrices, we call it separate projection. We will analyze the necessity of applying shared projection instead of separate projection.
Dynamically Selected Convolution Kernels We introduce a gating mechanism to automatically select the weights of the different convolution cells.
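The convolutional branch can be sketched as follows; plain depth-wise 1-D convolutions stand in for the dynamic convolution of BIBREF10, the two kernel sizes mirror the setting reported later (3 and 15), and the softmax gate over cells is one plausible reading of the gating mechanism, so the details are illustrative rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F
from torch import nn

class LocalConvBranch(nn.Module):
    """Depth-wise convolution cells with a learned gate over kernel sizes.

    `shared_v` is intended to be the same W^V linear layer used by the attention
    branch, realising the shared-projection variant described above.
    """

    def __init__(self, d_model: int, shared_v: nn.Linear, kernel_sizes=(3, 15)):
        super().__init__()
        self.shared_v = shared_v
        self.cells = nn.ModuleList([
            nn.Conv1d(d_model, d_model, k, padding=k // 2, groups=d_model)
            for k in kernel_sizes
        ])
        self.gate = nn.Parameter(torch.zeros(len(kernel_sizes)))  # one scalar per cell
        self.w_out = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        v2 = self.shared_v(x).transpose(1, 2)                  # (batch, d_model, seq_len)
        weights = F.softmax(self.gate, dim=0)                  # dynamic kernel selection
        mixed = sum(w * cell(v2) for w, cell in zip(weights, self.cells))
        return self.w_out(mixed.transpose(1, 2))
```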
MUSE: Parallel Multi-Scale Attention ::: Point-wise Feed-forward Network for Capturing Token Representations
To learn token-level representations, MUSE pairs a self-attention network with a position-wise feed-forward network at each layer. Since the linear transformations are the same across different positions, the position-wise feed-forward network can be seen as a token feature extractor.
where $W_1$, $b_1$, $W_2$, and $b_2$ are projection parameters.
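Putting the three branches together, a minimal MUSE-style module could look like the sketch below, reusing the GlobalAttention and LocalConvBranch classes from the earlier snippets; the simple additive fusion and the post-norm residual placement are our assumptions, since the fusion equation itself is not reproduced in this excerpt.

```python
import torch
from torch import nn

class MuseModule(nn.Module):
    """Parallel fusion of global attention, local convolution and a point-wise FFN."""

    def __init__(self, d_model: int, d_ffn: int):
        super().__init__()
        self.attn = GlobalAttention(d_model)
        self.conv = LocalConvBranch(d_model, shared_v=self.attn.w_v)  # shared projection W^V
        self.pointwise = nn.Sequential(
            nn.Linear(d_model, d_ffn), nn.ReLU(), nn.Linear(d_ffn, d_model)
        )
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The three branches run in parallel on the same input and are summed.
        fused = self.attn(x) + self.conv(x) + self.pointwise(x)
        return self.norm(x + fused)  # residual connection + layer normalization
```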
Experiment
We evaluate MUSE on four machine translation tasks. This section describes the datasets, experimental settings, detailed results, and analysis.
Experiment ::: Datasets
WMT14 En-Fr and En-De datasets The WMT 2014 English-French translation dataset, consisting of $36M$ sentence pairs, is adopted as a large dataset to test our model. We use the standard split of development and test sets: newstest2014 is the test set, and newstest2012+newstest2013 is the development set. Following BIBREF11, we also adopt a joint source and target BPE factorization with a vocabulary size of $40K$. As a medium-sized dataset, we borrow the setup of BIBREF0 and adopt the WMT 2014 English-German translation dataset, which consists of $4.5M$ sentence pairs; the BPE vocabulary size is set to $32K$. The test and validation sets we use are the same as in BIBREF0.
IWSLT De-En and En-Vi datasets Besides, we perform experiments on two small IWSLT datasets to test the small version of MUSE against other comparable models. The IWSLT 2014 German-English translation dataset consists of $160k$ sentence pairs. We also adopt a joint source and target BPE factorization with a vocabulary size of $32K$. The IWSLT 2015 English-Vietnamese translation dataset consists of $133K$ training sentence pairs. For the En-Vi task, we build a dictionary including all source and target tokens. The vocabulary size is $17.2K$ for English and $6.8K$ for Vietnamese.
Experiment ::: Experimental Settings ::: Model
For fair comparisons, we only compare with models reported with comparable model size and the same training data. We do not compare with BIBREF12 because it is an ensemble method. We build MUSE-base and MUSE-large with parameter sizes comparable to Transformer-base and Transformer-large. We adopt multi-head attention BIBREF0 as the implementation of self-attention in the MUSE module. The number of attention heads is set to 4 for MUSE-base and 16 for MUSE-large. We also add the network architecture built with MUSE-simple in a similar way to the comparison.
MUSE consists of 12 residual blocks for the encoder and 12 residual blocks for the decoder; the model dimension is set to 384 for MUSE-base and 768 for MUSE-large. The hidden dimension of the non-linear transformation is set to 768 for MUSE-base and 3072 for MUSE-large.
MUSE-large is trained on 4 Titan RTX GPUs, while MUSE-base is trained on a single NVIDIA RTX 2080Ti GPU. The batch size is calculated at the token level, which is called dynamic batching BIBREF0. We adopt dynamic convolution as the variant of depth-wise separable convolution. We tune the kernel size on the validation set. For convolution with a single kernel, we use a kernel size of 7 for all layers. In the case of dynamically selected kernels, the kernel size is 3 for small kernels and 15 for large kernels in all layers.
Experiment ::: Experimental Settings ::: Training
The training hyper-parameters are tuned on the validation set.
MUSE-large For training MUSE-large, following BIBREF13, parameters are updated every 32 steps. We train the model for $80K$ updates with a batch size of 5120 for En-Fr, and for ${30K}$ updates with a batch size of 3584 for En-De. The dropout rate is set to $0.1$ for En-Fr and ${0.3}$ for En-De. We borrow the optimizer setup from BIBREF10 and use a cosine learning rate schedule with 10000 warmup steps. The maximum learning rate is set to $0.001$ for En-De translation and ${0.0007}$ for En-Fr translation. For checkpoint averaging, following BIBREF10, we tune the number of averaged checkpoints for the En-De translation task. For En-Fr translation, we do not average checkpoints but use the final single checkpoint.
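The schedule can be written down compactly; the helper below is a sketch of one common linear-warmup plus cosine-decay formulation rather than the authors' exact code, so details such as the decay floor are assumptions.

```python
import math

def cosine_lr(step: int, max_lr: float, warmup_steps: int, total_steps: int) -> float:
    """Linear warmup followed by cosine decay to zero (one common variant)."""
    if step < warmup_steps:
        return max_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * max_lr * (1.0 + math.cos(math.pi * progress))

# Example: peak learning rate 1e-3, 10k warmup steps, 30k total updates (the En-De setting).
print(cosine_lr(step=10_000, max_lr=1e-3, warmup_steps=10_000, total_steps=30_000))
```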
MUSE-base We train and test MUSE-base on two small datasets, IWSLT 2014 De-En translation and IWSLT 2015 En-Vi translation. Following BIBREF0, we use the Adam optimizer with a learning rate of $0.001$. We use a warmup mechanism with inverse learning-rate decay and $4K$ warmup updates. For the De-En dataset, we train the model for $20K$ steps with a batch size of $4K$; the parameters are updated every 4 steps, and the dropout rate is set to $0.4$. For the En-Vi dataset, we train the model for $10K$ steps with a batch size of $4K$; the parameters are also updated every 4 steps, and the dropout rate is set to $0.3$. We save checkpoints every epoch and average the last 10 checkpoints for inference.
Experiment ::: Experimental Settings ::: Evaluation
During inference, we adopt beam search with a beam size of 5 for the De-En, En-Fr and En-Vi translation tasks. The length penalty is set to 0.8 for En-Fr according to the validation results, and to 1 for the two small datasets following the default setting of BIBREF14. We do not tune the beam width and length penalty further but use the settings reported in BIBREF0. The BLEU metric is adopted to evaluate model performance.
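For reference, corpus-level BLEU can be computed with the sacrebleu package as in the snippet below; the toy hypothesis and reference strings are placeholders, and whether the authors used this exact tool is not stated here.

```python
import sacrebleu

hypotheses = ["the cat sat on the mat", "muse improves translation quality"]
references = [["the cat sat on the mat", "muse improves the translation quality"]]
# `references` holds one full reference stream per inner list.

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```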
Experiment ::: Results
As shown in Table TABREF24, MUSE outperforms all previous models on En-De and En-Fr translation, including both state-of-the-art models based on stand-alone self-attention BIBREF0, BIBREF13, and convolutional models BIBREF11, BIBREF15, BIBREF10. This result shows that neither self-attention nor convolution alone is enough for sequence-to-sequence learning. The proposed parallel multi-scale attention improves over both on En-De and En-Fr.
Compared to the Evolved Transformer BIBREF19, which is constructed by NAS and also mixes convolutions of different kernel sizes, MUSE achieves a 2.2 BLEU gain on En-Fr translation.
Relative position and local attention constraints bring improvements over the original self-attention model, but parallel multi-scale attention outperforms them.
MUSE can also scale to small models and small datasets: as depicted in Table TABREF25, MUSE-base pushes the state of the art from 35.7 to 36.3 BLEU on the IWSLT De-En translation dataset.
Tables TABREF24 and TABREF25 show that MUSE-simple, which contains the basic idea of parallel multi-scale attention, achieves state-of-the-art performance on three major machine translation datasets.
Experiment ::: How do we propose effective parallel multi-scale attention?
In this subsection we compare MUSE and its variants on IWSLT 2014 De-En translation to answer this question.
Does concatenating self-attention with convolution certainly improve the model? To bridge the gap between the point-wise transformation, which learns token-level representations, and self-attention, which learns representations of global context, we introduce convolution to enhance our multi-scale attention. As we can see from the first experiment group of Table TABREF27, convolution is important in parallel multi-scale attention. However, it is not easy to combine convolution and self-attention in one module to build better representations for sequence-to-sequence tasks. As shown in the first line of both the second and third groups of Table TABREF27, simply learning local representations by using convolution or depth-wise separable convolution in parallel with self-attention harms the performance. Furthermore, combining depth-wise separable convolution (in this work we choose its best variant, dynamic convolution, as the implementation) is even worse than combining standard convolution.
Why do we choose DepthConv, and how important is sharing the projection between DepthConv and self-attention? We conjecture that convolution and self-attention both learn contextual sequence representations, and that they should share the point-wise transformation and perform the contextual transformation in the same hidden space. We first project the input to a hidden representation and perform a variant of depth-wise convolution and self-attention transformations in parallel. The first two experiments in the third group of Table TABREF27 validate the utility of sharing the projection in parallel multi-scale attention: shared projection gains 1.4 BLEU points over separate projection, and brings an improvement of 0.5 BLEU points over MUSE-simple (without DepthConv).
How large should the kernel be? Comparative experiments show that too large a kernel harms performance for both DepthConv and standard convolution. Since self-attention and point-wise transformations are also present, simply applying the growing kernel size schedule proposed in SliceNet BIBREF15 does not work. Thus, we propose to use dynamically selected kernel sizes and let the learned network decide the kernel size for each layer.
Experiment ::: Further Analysis ::: Parallel multi-scale attention brings time efficiency on GPUs
The underlying parallel structure (compared to the sequential structure in each block of the Transformer) allows MUSE to be computed efficiently on GPUs. For example, we can combine small matrices into large matrices; while this does not reduce the number of actual operations, it can be better parallelized by GPUs to speed up computation. Concretely, for each MUSE module, we first concatenate $W^Q,W^K,W^V$ of self-attention and $W_1$ of the point-wise feed-forward transformation into a single encoder matrix $W^{Enc}$, and then perform the transformations, such as self-attention, depth-separable convolution, and the nonlinear transformation, in parallel, to learn multi-scale representations in the hidden layer. $W^O,W_2,W^{out}$ can also be combined into a single decoder matrix $W^{Dec}$. The decoder of the sequence-to-sequence architecture can be implemented similarly.
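The speed-up comes purely from batching matrix multiplications, as the toy example below illustrates; the dimensions are made up and only the fusion-then-split idea matters.

```python
import torch

batch, seq_len, d_model, d_ffn = 2, 7, 512, 2048
x = torch.randn(batch, seq_len, d_model)

# Individual projection matrices of one MUSE module.
w_q, w_k, w_v = (torch.randn(d_model, d_model) for _ in range(3))
w_1 = torch.randn(d_model, d_ffn)

# Fuse them into one "encoder matrix" so a single large matmul replaces four small ones.
w_enc = torch.cat([w_q, w_k, w_v, w_1], dim=1)                # (d_model, 3*d_model + d_ffn)
q, k, v, h = (x @ w_enc).split([d_model, d_model, d_model, d_ffn], dim=-1)

# Same result as the separate multiplications, just better parallelised on a GPU.
print(torch.allclose(q, x @ w_q, atol=1e-4))
```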
In Table TABREF31, we conduct comparisons to show the speed gains of the aforementioned implementation; the batch size is set to one sample per batch to simulate an online inference environment. Under these settings, where the numbers of parameters are similar for MUSE and the Transformer, an increase of about 31% in inference speed is obtained. The experiments use MUSE with 6 MUSE-simple modules and a Transformer with 6 base blocks. The hidden size is set to 512.
Parallel multi-scale attention generates much better long sequences As demonstrated in Figure FIGREF32, MUSE generates better sequences of various lengths than self-attention, and it is remarkably adept at generating long sequences; e.g., for sequences longer than 100 tokens, MUSE is two times better.
Lower layers prefer local context and higher layers prefer more contextual representations MUSE contains multiple dynamic convolution cells, whose streams are fused by a gating mechanism. The weight for each dynamic cell is a scalar. Here we analyze the weights of the different dynamic convolution cells in different layers. Figure FIGREF32 shows that as the layer depth increases, the weight of dynamic convolution cells with small kernel sizes gradually decreases. This demonstrates that lower layers prefer local features while higher layers prefer global features, which corresponds to the finding in BIBREF26.
MUSE not only gains BLEU scores, but also generates more reasonable sentences and improves translation quality. We conduct a case study on the De-En dataset; the cases are shown in Table TABREF34 in the Appendix. In case 1, although the baseline Transformer translates many correct words according to the source sentence, the translated sentence is not fluent at all. It indicates that the Transformer does not capture the relationship between some words and their neighbors, such as “right” and “clap”. By contrast, MUSE captures them well by combining local convolution with global self-attention. In case 2, the causal adverbial clause is correctly translated by MUSE, while the Transformer misses the word “why” and fails to translate it.
Related Work
Sequence-to-sequence learning is an important task in machine learning. It involves understanding and generating sequences. Machine translation is the touchstone of sequence-to-sequence learning. Traditional approaches usually adopt long short-term memory networks BIBREF27, BIBREF28 to learn the representation of sequences. However, these models are either built upon auto-regressive structures requiring longer encoding time or perform worse on real-world natural language processing tasks. Recent studies explore convolutional neural networks (CNNs) BIBREF11 or self-attention BIBREF0 to support highly parallel sequence modeling without requiring an auto-regressive structure during encoding, thus bringing large efficiency improvements. They are strong at capturing local or global dependencies.
There are several studies on combining self-attention and convolution. However, they do not surpass both convolutional and self-attention mechanisms. BIBREF4 propose to augment convolution with self-attention by directly concatenating them for computer vision tasks. However, as demonstrated in Table TABREF27, their method does not work for the sequence-to-sequence learning task. Indeed, state-of-the-art models on question answering tasks still rely on self-attention and do not adopt the ideas of QANet BIBREF29. Both self-attention BIBREF13 and convolution BIBREF10 outperform the Evolved Transformer by nearly 2 BLEU points on En-Fr translation. It seems that learning global and local context through stacking self-attention and convolution layers does not beat either self-attention or convolution models. In contrast, the proposed parallel multi-scale attention outperforms previous convolution- or self-attention-based models on main translation tasks, showing its effectiveness for sequence-to-sequence learning.
Conclusion and Future work
Although the self-attention mechanism has been prevalent in sequence modeling, we find that attention suffers from dispersed weights especially for long sequences, resulting from the insufficient local information.
To address this problem, we present Parallel Multi-scale Attention (MUSE) and MUSE-simple. MUSE-simple introduces the idea of parallel multi-scale attention into sequence-to-sequence learning, and MUSE fuses self-attention, convolution, and point-wise transformation together to explicitly learn global, local and token-level sequence representations. In particular, we find from the empirical results that the shared projection plays an important part in its success and is essential for our multi-scale learning.
Beyond the inspiring new state-of-the-art results on three major machine translation datasets, detailed analysis and model variants also verify the effectiveness of MUSE.
For future work, the parallel structure is highly extensible and provides many opportunities to improve these models. In addition, given the success of shared projection, we would like to explore its detailed effects on contextual representation learning. Finally, we are excited about the future of parallel multi-scale attention and plan to apply this simple but effective idea to other tasks, including image and speech.
Conclusion and Future work ::: Acknowledgments
This work was supported in part by National Natural Science Foundation of China (No. 61673028). | 2.2 BLEU gains |
282aa4e160abfa7569de7d99b8d45cabee486ba4 | 282aa4e160abfa7569de7d99b8d45cabee486ba4_0 | Q: How do they determine the opinion summary?
Text: Introduction
Aspect-Based Sentiment Analysis (ABSA) involves detecting opinion targets and locating opinion indicators in sentences in product review texts BIBREF0 . The first sub-task, called Aspect Term Extraction (ATE), is to identify the phrases targeted by opinion indicators in review sentences. For example, in the sentence “I love the operating system and preloaded software”, the words “operating system” and “preloaded software” should be extracted as aspect terms, and the sentiment on them is conveyed by the opinion word “love”. According to the task definition, for a term/phrase being regarded as an aspect, it should co-occur with some “opinion words” that indicate a sentiment polarity on it BIBREF1 .
Many researchers formulated ATE as a sequence labeling problem or a token-level classification problem. Traditional sequence models such as Conditional Random Fields (CRFs) BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , Long Short-Term Memory Networks (LSTMs) BIBREF6 and classification models such as Support Vector Machine (SVM) BIBREF7 have been applied to tackle the ATE task, and achieved reasonable performance. One drawback of these existing works is that they do not exploit the fact that, according to the task definition, aspect terms should co-occur with opinion-indicating words. Thus, the above methods tend to output false positives on those frequently used aspect terms in non-opinionated sentences, e.g., the word “restaurant” in “the restaurant was packed at first, so we waited for 20 minutes”, which should not be extracted because the sentence does not convey any opinion on it.
There are a few works that consider opinion terms when tackling the ATE task. BIBREF8 proposed Recursive Neural Conditional Random Fields (RNCRF) to explicitly extract aspects and opinions in a single framework. The aspect-opinion relation is modeled via joint extraction and dependency-based representation learning. One assumption of RNCRF is that dependency parsing will capture the relation between aspect terms and opinion words in the same sentence, so that the joint extraction can benefit. Such an assumption is usually valid for simple sentences, but rather fragile for complicated structures such as clauses and parentheses. Moreover, RNCRF suffers from errors of dependency parsing because its network construction hinges on the dependency tree of the inputs. CMLA BIBREF9 models the aspect-opinion relation without using syntactic information. Instead, it enables the two tasks to share information via an attention mechanism. For example, it exploits the global opinion information by directly computing the association score between the aspect prototype and individual opinion hidden representations and then performing weighted aggregation. However, such aggregation may introduce noise. To some extent, this drawback is inherited from the attention mechanism, as also observed in machine translation BIBREF10 and image captioning BIBREF11 .
To make better use of opinion information to assist aspect term extraction, we distill the opinion information of the whole input sentence into an opinion summary, and such distillation is conditioned on the particular current token for aspect prediction. Then, the opinion summary is employed as part of the features for the current aspect prediction. Taking the sentence “the restaurant is cute but not upscale” as an example, when our model performs the prediction for the word “restaurant”, it first generates an opinion summary of the entire sentence conditioned on “restaurant”. Due to the strong correlation between “restaurant” and “upscale” (an opinion word), the opinion summary will convey more information about “upscale”, which helps predict “restaurant” as an aspect with high probability. Note that the opinion summary is built on the initial opinion features coming from an auxiliary opinion detection task, and such initial features already distinguish opinion words to some extent. Moreover, we propose a novel transformation network that helps strengthen the favorable correlations, e.g. between “restaurant” and “upscale”, so that the produced opinion summary involves less noise.
Besides the opinion summary, another useful clue we explore is the aspect prediction history, inspired by two observations: (1) In sequence labeling, the predictions at previous time steps are useful clues for reducing the error space of the current prediction. For example, in the B-I-O tagging scheme (refer to the task formulation below), if the previous prediction is “O”, then the current prediction cannot be “I”; (2) It is observed that some sentences contain multiple aspect terms. For example, “Apple is unmatched in product quality, aesthetics, craftmanship, and customer service” has a coordinate structure of aspects. Under this structure, the previously predicted commonly-used aspect terms (e.g., “product quality”) can guide the model to find the infrequent aspect terms (e.g., “craftmanship”). To capture the above clues, our model distills the information of the previous aspect detection for making a better prediction at the current step.
Concretely, we propose a framework for more accurate aspect term extraction by exploiting the opinion summary and the aspect detection history. Firstly, we employ two standard Long-Short Term Memory Networks (LSTMs) for building the initial aspect and opinion representations recording the sequential information. To encode the historical information into the initial aspect representations at each time step, we propose truncated history attention to distill useful features from the most recent aspect predictions and generate the history-aware aspect representations. We also design a selective transformation network to obtain the opinion summary at each time step. Specifically, we apply the aspect information to transform the initial opinion representations and apply attention over the transformed representations to generate the opinion summary. Experimental results show that our framework can outperform state-of-the-art methods.
The ATE Task
Given a sequence $\mathbf{x} = (x_1, \ldots, x_T)$ of $T$ words, the ATE task can be formulated as a token/word-level sequence labeling problem: predict an aspect label sequence $\mathbf{y} = (y_1, \ldots, y_T)$, where each $y_t$ comes from a finite label set $\mathcal{Y} = \{B, I, O\}$ which describes the possible aspect labels. As shown in the example below:
$B$, $I$, and $O$ denote the beginning of, the inside of, and the outside of an aspect span, respectively. Note that in commonly-used datasets such as BIBREF12 , the gold-standard opinions are usually not annotated.
Model Description
As shown in Figure FIGREF3 , our model contains two key components, namely Truncated History-Attention (THA) and Selective Transformation Network (STN), for capturing aspect detection history and opinion summary respectively. THA and STN are built on two LSTMs that generate the initial word representations for the primary ATE task and the auxiliary opinion detection task respectively. THA is designed to integrate the information of aspect detection history into the current aspect feature to generate a new history-aware aspect representation. STN first calculates a new opinion representation conditioned on the current aspect candidate. Then, we employ a bi-linear attention network to calculate the opinion summary as the weighted sum of the new opinion representations, according to their associations with the current aspect representation. Finally, the history-aware aspect representation and the opinion summary are concatenated as features for aspect prediction of the current time step.
As Recurrent Neural Networks can record sequential information BIBREF13 , we employ two vanilla LSTMs to build the initial token-level contextualized representations for sequence labeling in the ATE task and in the auxiliary opinion word detection task, respectively. For simplicity, let $\mathrm{LSTM}^{\mathcal{T}}$ denote an LSTM unit, where $\mathcal{T} \in \{A, O\}$ is the task indicator. In the following sections, unless otherwise specified, symbols with superscript $A$ and $O$ are the notations used in the ATE task and the opinion detection task, respectively. We use a bi-directional LSTM to generate the initial token-level representations $h^{\mathcal{T}}_t \in \mathbb{R}^{2d}$ ($d$ is the dimension of the hidden states): $h^{\mathcal{T}}_t = [\overrightarrow{\mathrm{LSTM}}^{\mathcal{T}}(x_t);\, \overleftarrow{\mathrm{LSTM}}^{\mathcal{T}}(x_t)]$, $\mathcal{T} \in \{A, O\}$.
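As a concrete illustration, the two task-specific encoders can be instantiated as below; the vocabulary size and hidden size are placeholder values rather than the ones used in the experiments.

```python
import torch
from torch import nn

emb_dim, hidden, vocab_size = 300, 100, 5000   # hidden and vocab sizes are placeholders
embed = nn.Embedding(vocab_size, emb_dim)

# One BiLSTM per task: aspect tagging (A) and opinion-word detection (O).
lstm_a = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
lstm_o = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

tokens = torch.randint(0, vocab_size, (4, 20))   # (batch, seq_len)
h_a, _ = lstm_a(embed(tokens))                   # (batch, seq_len, 2*hidden): initial aspect reps
h_o, _ = lstm_o(embed(tokens))                   # (batch, seq_len, 2*hidden): initial opinion reps
```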
In principle, an RNN can memorize the entire history of the predictions BIBREF13 , but there is no mechanism to exploit the relation between previous predictions and the current prediction. As discussed above, such a relation could be useful for two reasons: (1) reducing the model's error space in predicting the current label by considering the definition of the B-I-O schema, and (2) improving the prediction accuracy for multiple aspects in one coordinate structure.
We propose a Truncated History-Attention (THA) component (the THA block in Figure FIGREF3 ) to explicitly model the aspect-aspect relation. Specifically, THA caches the most recent INLINEFORM0 hidden states. At the current prediction time step INLINEFORM1 , THA calculates the normalized importance score INLINEFORM2 of each cached state INLINEFORM3 ( INLINEFORM4 ) as follows: DISPLAYFORM0
DISPLAYFORM0
INLINEFORM0 denotes the previous history-aware aspect representation (refer to Eq. EQREF12 ). INLINEFORM1 can be learned during training. INLINEFORM2 are parameters associated with previous aspect representations, current aspect representation and previous history-aware aspect representations respectively. Then, the aspect history INLINEFORM3 is obtained as follows: DISPLAYFORM0
To benefit from the previous aspect detection, we consolidate the hidden aspect representation with the distilled aspect history to generate features for the current prediction. Specifically, we adopt a way similar to the residual block BIBREF14 , which is shown to be useful in refining word-level features in Machine Translation BIBREF15 and Part-Of-Speech tagging BIBREF16 , to calculate the history-aware aspect representations INLINEFORM0 at the time step INLINEFORM1 : DISPLAYFORM0
where ReLU denotes the rectified linear unit activation function.
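The sketch below shows one way to realise the truncated history attention; because the display equations are not reproduced in this excerpt, the additive scoring function and the exact way the cache is read are our assumptions, guided by the surrounding description.

```python
import torch
import torch.nn.functional as F
from torch import nn

class TruncatedHistoryAttention(nn.Module):
    """Attend over the N most recent aspect states and fuse them with the current one."""

    def __init__(self, d: int, n_history: int = 5):
        super().__init__()
        self.n_history = n_history
        self.w_prev = nn.Linear(d, d, bias=False)    # previous aspect representations
        self.w_cur = nn.Linear(d, d, bias=False)     # current aspect representation
        self.w_tilde = nn.Linear(d, d, bias=False)   # previous history-aware representation
        self.v = nn.Linear(d, 1, bias=False)
        self.out = nn.Linear(d, d)

    def forward(self, h_a: torch.Tensor) -> torch.Tensor:
        # h_a: (seq_len, d) initial aspect representations of one sentence
        outputs = []
        tilde_prev = torch.zeros_like(h_a[0])
        for t in range(h_a.size(0)):
            h_t = h_a[t]
            if t == 0:
                tilde_t = h_t
            else:
                cache = h_a[max(0, t - self.n_history):t]     # most recent hidden states
                scores = self.v(torch.tanh(
                    self.w_prev(cache) + self.w_cur(h_t) + self.w_tilde(tilde_prev)))
                summary = (F.softmax(scores, dim=0) * cache).sum(dim=0)   # distilled history
                tilde_t = h_t + F.relu(self.out(summary))                 # residual-style fusion
            outputs.append(tilde_t)
            tilde_prev = tilde_t
        return torch.stack(outputs)   # history-aware aspect representations
```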
Previous works show that modeling aspect-opinion association is helpful to improve the accuracy of ATE, as exemplified in employing attention mechanism for calculating the opinion information BIBREF9 , BIBREF17 . MIN BIBREF17 focuses on a few surrounding opinion representations and computes their importance scores according to the proximity and the opinion salience derived from a given opinion lexicon. However, it is unable to capture the long-range association between aspects and opinions. Besides, the association is not strong because only the distance information is modeled. Although CMLA BIBREF9 can exploit global opinion information for aspect extraction, it may suffer from the noise brought in by attention-based feature aggregation. Taking the aspect term “fish” in “Furthermore, while the fish is unquestionably fresh, rolls tend to be inexplicably bland.” as an example, it might be enough to tell “fish” is an aspect given the appearance of the strongly related opinion “fresh”. However, CMLA employs conventional attention and does not have a mechanism to suppress the noise caused by other terms such as “rolls”. Dependency parsing seems to be a good solution for finding the most related opinion and indeed it was utilized in BIBREF8 , but the parser is prone to generating mistakes when processing the informal online reviews, as discussed in BIBREF17 .
To make use of opinion information and suppress the possible noise, we propose a novel Selective Transformation Network (STN) (the STN block in Figure FIGREF3 ), and insert it before attending to global opinion features so that more important features with respect to a given aspect candidate will be highlighted. Specifically, STN first calculates a new opinion representation INLINEFORM0 given the current aspect feature INLINEFORM1 as follows: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are parameters for history-aware aspect representations and opinion representations respectively. They map INLINEFORM2 and INLINEFORM3 to the same subspace. Here the aspect feature INLINEFORM4 acts as a “filter” to keep more important opinion features. Equation EQREF14 also introduces a residual block to obtain a better opinion representation INLINEFORM5 , which is conditioned on the current aspect feature INLINEFORM6 .
For distilling the global opinion summary, we introduce a bi-linear term to calculate the association score between INLINEFORM0 and each INLINEFORM1 : DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are parameters of the Bi-Linear Attention layer. The improved opinion summary INLINEFORM2 at the time INLINEFORM3 is obtained via the weighted sum of the opinion representations: DISPLAYFORM0
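The selective transformation and the bi-linear attention can be sketched as follows; the ReLU-based residual transformation mirrors the residual block mentioned above, while the remaining details (names and shapes) are illustrative assumptions, since the display equations themselves are not shown here.

```python
import torch
import torch.nn.functional as F
from torch import nn

class SelectiveTransformation(nn.Module):
    """Condition opinion representations on the current aspect feature,
    then distill an opinion summary with bi-linear attention."""

    def __init__(self, d_aspect: int, d_opinion: int):
        super().__init__()
        self.w_a = nn.Linear(d_aspect, d_opinion, bias=False)    # maps the aspect feature
        self.w_o = nn.Linear(d_opinion, d_opinion, bias=False)   # maps the opinion features
        self.bilinear = nn.Bilinear(d_aspect, d_opinion, 1)

    def forward(self, aspect_t: torch.Tensor, h_o: torch.Tensor) -> torch.Tensor:
        # aspect_t: (d_aspect,) history-aware aspect feature at step t
        # h_o:      (seq_len, d_opinion) initial opinion representations
        h_o_new = h_o + F.relu(self.w_o(h_o) + self.w_a(aspect_t))   # selective transformation
        scores = self.bilinear(aspect_t.expand(h_o_new.size(0), -1).contiguous(), h_o_new)
        weights = F.softmax(scores, dim=0)                           # (seq_len, 1)
        return (weights * h_o_new).sum(dim=0)                        # opinion summary at step t
```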
Finally, we concatenate the opinion summary INLINEFORM0 and the history-aware aspect representation INLINEFORM1 and feed it into the top-most fully-connected (FC) layer for aspect prediction: DISPLAYFORM0 DISPLAYFORM1
Note that our framework actually performs multi-task learning, i.e. predicting both aspects and opinions. We regard the initial token-level representations INLINEFORM0 as the features for opinion prediction: DISPLAYFORM0
INLINEFORM0 and INLINEFORM1 are parameters of the FC layers.
Joint Training
All the components in the proposed framework are differentiable. Thus, our framework can be efficiently trained with gradient methods. We use the token-level cross-entropy error between the predicted distribution INLINEFORM0 ( INLINEFORM1 ) and the gold distribution INLINEFORM2 as the loss function: DISPLAYFORM0
Then, the losses from both tasks are combined to form the training objective of the entire model: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 represent the loss functions for aspect and opinion extractions respectively.
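With per-token logits for both tasks, the training objective boils down to two token-level cross-entropy terms; the equal weighting of the two losses in the sketch below is an assumption, since the combination weights are not given in this excerpt.

```python
import torch
import torch.nn.functional as F

def joint_loss(aspect_logits, aspect_gold, opinion_logits, opinion_gold):
    """Token-level cross-entropy for aspect tagging and opinion detection, summed."""
    # *_logits: (batch, seq_len, n_labels); *_gold: (batch, seq_len) label ids
    l_aspect = F.cross_entropy(aspect_logits.transpose(1, 2), aspect_gold)
    l_opinion = F.cross_entropy(opinion_logits.transpose(1, 2), opinion_gold)
    return l_aspect + l_opinion

aspect_logits = torch.randn(4, 20, 3, requires_grad=True)    # B/I/O tags
opinion_logits = torch.randn(4, 20, 2, requires_grad=True)   # opinion vs. non-opinion words
loss = joint_loss(aspect_logits, torch.randint(0, 3, (4, 20)),
                  opinion_logits, torch.randint(0, 2, (4, 20)))
loss.backward()
```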
Datasets
To evaluate the effectiveness of the proposed framework for the ATE task, we conduct experiments over four benchmark datasets from the SemEval ABSA challenge BIBREF1 , BIBREF18 , BIBREF12 . Table TABREF24 shows their statistics. INLINEFORM0 (SemEval 2014) contains reviews of the laptop domain and those of INLINEFORM1 (SemEval 2014), INLINEFORM2 (SemEval 2015) and INLINEFORM3 (SemEval 2016) are for the restaurant domain. In these datasets, aspect terms have been labeled by the task organizer.
Gold standard annotations for opinion words are not provided. Thus, we choose words with strong subjectivity from MPQA to provide the distant supervision BIBREF19 . To compare with the best SemEval systems and the current state-of-the-art methods, we use the standard train-test split in SemEval challenge as shown in Table TABREF24 .
Comparisons
We compare our framework with the following methods:
CRF-1: Conditional Random Fields with basic feature templates.
CRF-2: Conditional Random Fields with basic feature templates and word embeddings.
Semi-CRF: First-order Semi-Markov Conditional Random Fields BIBREF20 and the feature templates in BIBREF21 are adopted.
LSTM: Vanilla bi-directional LSTM with pre-trained word embeddings.
IHS_RD BIBREF2 , DLIREC BIBREF3 , EliXa BIBREF22 , NLANGP BIBREF4 : The winning systems in the ATE subtask in SemEval ABSA challenge BIBREF1 , BIBREF18 , BIBREF12 .
WDEmb BIBREF5 : Enhanced CRF with word embeddings, dependency path embeddings and linear context embeddings.
MIN BIBREF17 : MIN consists of three LSTMs. Two LSTMs are employed to model the memory interactions between ATE and opinion detection. The last one is a vanilla LSTM used to predict the subjectivity of the sentence as additional guidance.
RNCRF BIBREF8 : CRF with high-level representations learned from Dependency Tree based Recursive Neural Network.
CMLA BIBREF9 : CMLA is a multi-layer architecture where each layer consists of two coupled GRUs to model the relation between aspect terms and opinion words.
To clarify, our framework aims at extracting aspect terms, with the opinion information employed as an auxiliary signal, while RNCRF and CMLA perform joint extraction of aspects and opinions. Nevertheless, the comparison between our framework and RNCRF/CMLA is still fair, because we do not use the manually annotated opinions used by RNCRF and CMLA; instead, we employ an existing opinion lexicon to provide weak opinion supervision.
Settings
We pre-processed each dataset by lowercasing all words and replacing all punctuation with PUNCT. We use pre-trained GloVe 840B vectors BIBREF23 to initialize the word embeddings, and the dimension (i.e., INLINEFORM0 ) is 300. For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution INLINEFORM1 , as done in BIBREF24 . All of the weight matrices except those in the LSTMs are initialized from the uniform distribution INLINEFORM2 . For the initialization of the matrices in the LSTMs, we adopt the Glorot Uniform strategy BIBREF25 . Besides, all biases are initialized as 0's.
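The initialization scheme can be written out directly with torch.nn.init, as in the sketch below; the vocabulary, the GloVe lookup, and the uniform bound for out-of-vocabulary words are placeholders, since the actual bounds are not readable in this excerpt.

```python
import torch
from torch import nn

vocab = ["the", "restaurant", "was", "packed", "<unk>"]    # placeholder vocabulary
glove = {w: torch.randn(300) for w in vocab[:4]}           # stand-in for GloVe 840B vectors

emb = nn.Embedding(len(vocab), 300)
with torch.no_grad():
    for i, w in enumerate(vocab):
        if w in glove:
            emb.weight[i] = glove[w]
        else:
            emb.weight[i].uniform_(-0.25, 0.25)    # OOV words; the bound is illustrative

lstm = nn.LSTM(300, 100, bidirectional=True, batch_first=True)
for name, p in lstm.named_parameters():
    if "weight" in name:
        nn.init.xavier_uniform_(p)    # Glorot uniform for LSTM weight matrices
    else:
        nn.init.zeros_(p)             # all biases initialized to 0
```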
The model is trained with SGD. We apply dropout over the ultimate aspect/opinion features and the input word embeddings of LSTMs. The dropout rates are empirically set as 0.5. With 5-fold cross-validation on the training data of INLINEFORM0 , other hyper-parameters are set as follows: INLINEFORM1 , INLINEFORM2 ; the number of cached historical aspect representations INLINEFORM3 is 5; the learning rate of SGD is 0.07.
Main Results
As shown in Table TABREF39 , the proposed framework consistently obtains the best scores on all of the four datasets. Compared with the winning systems of SemEval ABSA, our framework achieves 5.0%, 1.6%, 1.4%, 1.3% absolute gains on INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 respectively.
Our framework outperforms RNCRF, a state-of-the-art model based on dependency parsing, on all datasets. We also notice that RNCRF does not perform well on INLINEFORM0 and INLINEFORM1 (3.7% and 3.9% lower than ours). We find that INLINEFORM2 and INLINEFORM3 contain many informal reviews, thus RNCRF's performance degradation is probably due to the errors from the dependency parser when processing such informal texts.
CMLA and MIN do not rely on dependency parsing, instead, they employ attention mechanism to distill opinion information to help aspect extraction. Our framework consistently performs better than them. The gains presumably come from two perspectives: (1) In our model, the opinion summary is exploited after performing the selective transformation conditioned on the current aspect features, thus the summary can to some extent avoid the noise due to directly applying conventional attention. (2) Our model can discover some uncommon aspects under the guidance of some commonly-used aspects in coordinate structures by the history attention.
CRF with the basic feature template is not strong; therefore, we add CRF-2 as another baseline. As shown in Table TABREF39 , CRF-2 with word embeddings achieves much better results than CRF-1 on all datasets. WDEmb, which is also an enhanced CRF-based method using additional dependency context embeddings, obtains superior performance to CRF-2. Therefore, the above comparison shows that word embeddings are useful and that embeddings incorporating structural information can further improve the performance.
Ablation Study
To further investigate the efficacy of the key components in our framework, namely THA and STN, we perform an ablation study, as shown in the second block of Table TABREF39 . The results show that both THA and STN help improve the performance, and the contribution of STN is slightly larger than that of THA. “OURS w/o THA & STN” only keeps the basic bi-linear attention. Although it does not perform badly, it is still less competitive than the strongest baseline (i.e., CMLA), suggesting that only using an attention mechanism to distill the opinion summary is not enough. After inserting the STN component before the bi-linear attention, i.e. “OURS w/o THA”, we obtain about 1% absolute gains on each dataset, and the performance becomes comparable to CMLA. By adding THA, i.e. “OURS”, the performance is further improved, and all state-of-the-art methods are surpassed.
Attention Visualization and Case Study
In Figure FIGREF41 , we visualize the opinion attention scores of the words in two example sentences with the candidate aspects “maitre-D” and “bathroom”. The scores in Figures FIGREF41 and FIGREF41 show that our full model captures the related opinion words very accurately with significantly larger scores, i.e. “incredibly”, “unwelcoming” and “arrogant” for “maitre-D”, and “unfriendly” and “filthy” for “bathroom”. “OURS w/o STN” directly applies attention over the opinion hidden states INLINEFORM0 's, similar to what CMLA does. As shown in Figure FIGREF41 , it captures some unrelated opinion words (e.g. “fine”) and even some non-opinionated words. As a result, it brings in some noise into the global opinion summary, and consequently the final prediction accuracy will be affected. This example demonstrates that the proposed STN works pretty well to help attend to more related opinion words given a particular aspect.
Some predictions of our model and those of LSTM and OURS w/o THA & STN are given in Table TABREF43 . The models incorporating the attention-based opinion summary (i.e., OURS and OURS w/o THA & STN) can better determine whether commonly-used nouns are aspect terms or not (e.g. “device” in the first input), since they make decisions based on the global opinion information. Besides, they are able to extract some infrequent or even misspelled aspect terms (e.g. “survice” in the second input) based on the indicative clues provided by opinion words. For the last three cases, which contain aspects in coordinate structures (i.e. the third and the fourth) or long aspects (i.e. the fifth), our model gives precise predictions owing to the previous detection clues captured by THA. Without these clues, the baseline models fail.
Related Work
Some initial works BIBREF26 developed a bootstrapping framework for tackling Aspect Term Extraction (ATE) based on the observation that opinion words are usually located around the aspects. BIBREF27 and BIBREF28 performed co-extraction of aspect terms and opinion words based on sophisticated syntactic patterns. However, relying on syntactic patterns suffers from parsing errors when processing informal online reviews. To avoid this drawback, BIBREF29 , BIBREF30 employed word-based translation models. Specifically, these models formulated the ATE task as a monolingual word alignment process, and the aspect-opinion relation is captured by alignment links rather than word dependencies. The ATE task can also be formulated as a token-level sequence labeling problem. The winning systems BIBREF2 , BIBREF22 , BIBREF4 of the SemEval ABSA challenges employed traditional sequence models, such as Conditional Random Fields (CRFs) and Maximum Entropy (ME), to detect aspects. Besides requiring heavy feature engineering, they also ignored opinion information.
Recently, neural network based models, such as LSTM-based BIBREF6 and CNN-based BIBREF31 methods, have become the mainstream approach. Later on, some neural models jointly extracting aspects and opinions were proposed. BIBREF8 performs the two tasks in a single Tree-Based Recursive Neural Network. Their network structure depends on dependency parsing, which is prone to errors on informal reviews. CMLA BIBREF9 consists of multiple attention layers on top of standard GRUs to extract the aspects and opinion words. Similarly, MIN BIBREF17 employs multiple LSTMs to interactively perform aspect term extraction and opinion word extraction in a multi-task learning framework. Our framework is different from them in two respects: (1) it filters the opinion summary by incorporating the aspect features at each time step into the original opinion representations; (2) it exploits the history of aspect detection to capture coordinate structures and previous aspect features.
Concluding Discussions
For more accurate aspect term extraction, we explored two important types of information, namely the aspect detection history and the opinion summary, and designed two corresponding components, i.e. the truncated history attention and the selective transformation network. Experimental results show that our model dominates joint extraction works such as RNCRF and CMLA on ATE performance. This suggests that joint extraction sacrifices the accuracy of aspect prediction, even though the ground-truth opinion words were annotated by those authors. Moreover, one should notice that those joint extraction methods do not care about the correspondence between the extracted aspect terms and opinion words. Therefore, the necessity of such joint extraction should be called into question, given the experimental findings in this paper. | the weighted sum of the new opinion representations, according to their associations with the current aspect representation |
ecfb2e75eb9a8eba8f640a039484874fa0d2fceb | ecfb2e75eb9a8eba8f640a039484874fa0d2fceb_0 | Q: Do they explore how useful is the detection history and opinion summary?
Text: Introduction
Aspect-Based Sentiment Analysis (ABSA) involves detecting opinion targets and locating opinion indicators in sentences in product review texts BIBREF0 . The first sub-task, called Aspect Term Extraction (ATE), is to identify the phrases targeted by opinion indicators in review sentences. For example, in the sentence “I love the operating system and preloaded software”, the words “operating system” and “preloaded software” should be extracted as aspect terms, and the sentiment on them is conveyed by the opinion word “love”. According to the task definition, for a term/phrase being regarded as an aspect, it should co-occur with some “opinion words” that indicate a sentiment polarity on it BIBREF1 .
Many researchers formulated ATE as a sequence labeling problem or a token-level classification problem. Traditional sequence models such as Conditional Random Fields (CRFs) BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , Long Short-Term Memory Networks (LSTMs) BIBREF6 and classification models such as Support Vector Machine (SVM) BIBREF7 have been applied to tackle the ATE task, and achieved reasonable performance. One drawback of these existing works is that they do not exploit the fact that, according to the task definition, aspect terms should co-occur with opinion-indicating words. Thus, the above methods tend to output false positives on those frequently used aspect terms in non-opinionated sentences, e.g., the word “restaurant” in “the restaurant was packed at first, so we waited for 20 minutes”, which should not be extracted because the sentence does not convey any opinion on it.
There are a few works that consider opinion terms when tackling the ATE task. BIBREF8 proposed Recursive Neural Conditional Random Fields (RNCRF) to explicitly extract aspects and opinions in a single framework. Aspect-opinion relation is modeled via joint extraction and dependency-based representation learning. One assumption of RNCRF is that dependency parsing will capture the relation between aspect terms and opinion words in the same sentence so that the joint extraction can benefit. Such assumption is usually valid for simple sentences, but rather fragile for some complicated structures, such as clauses and parenthesis. Moreover, RNCRF suffers from errors of dependency parsing because its network construction hinges on the dependency tree of inputs. CMLA BIBREF9 models aspect-opinion relation without using syntactic information. Instead, it enables the two tasks to share information via attention mechanism. For example, it exploits the global opinion information by directly computing the association score between the aspect prototype and individual opinion hidden representations and then performing weighted aggregation. However, such aggregation may introduce noise. To some extent, this drawback is inherited from the attention mechanism, as also observed in machine translation BIBREF10 and image captioning BIBREF11 .
To make better use of opinion information to assist aspect term extraction, we distill the opinion information of the whole input sentence into opinion summary, and such distillation is conditioned on a particular current token for aspect prediction. Then, the opinion summary is employed as part of features for the current aspect prediction. Taking the sentence “the restaurant is cute but not upscale” as an example, when our model performs the prediction for the word “restaurant”, it first generates an opinion summary of the entire sentence conditioned on “restaurant”. Due to the strong correlation between “restaurant' and “upscale” (an opinion word), the opinion summary will convey more information of “upscale” so that it will help predict “restaurant” as an aspect with high probability. Note that the opinion summary is built on the initial opinion features coming from an auxiliary opinion detection task, and such initial features already distinguish opinion words to some extent. Moreover, we propose a novel transformation network that helps strengthen the favorable correlations, e.g. between “restaurant' and “upscale”, so that the produced opinion summary involves less noise.
Besides the opinion summary, another useful clue we explore is the aspect prediction history due to the inspiration of two observations: (1) In sequential labeling, the predictions at the previous time steps are useful clues for reducing the error space of the current prediction. For example, in the B-I-O tagging (refer to Section SECREF4 ), if the previous prediction is “O”, then the current prediction cannot be “I”; (2) It is observed that some sentences contain multiple aspect terms. For example, “Apple is unmatched in product quality, aesthetics, craftmanship, and customer service” has a coordinate structure of aspects. Under this structure, the previously predicted commonly-used aspect terms (e.g., “product quality”) can guide the model to find the infrequent aspect terms (e.g., “craftmanship”). To capture the above clues, our model distills the information of the previous aspect detection for making a better prediction on the current state.
Concretely, we propose a framework for more accurate aspect term extraction by exploiting the opinion summary and the aspect detection history. Firstly, we employ two standard Long-Short Term Memory Networks (LSTMs) for building the initial aspect and opinion representations recording the sequential information. To encode the historical information into the initial aspect representations at each time step, we propose truncated history attention to distill useful features from the most recent aspect predictions and generate the history-aware aspect representations. We also design a selective transformation network to obtain the opinion summary at each time step. Specifically, we apply the aspect information to transform the initial opinion representations and apply attention over the transformed representations to generate the opinion summary. Experimental results show that our framework can outperform state-of-the-art methods.
The ATE Task
Given a sequence INLINEFORM0 of INLINEFORM1 words, the ATE task can be formulated as a token/word level sequence labeling problem to predict an aspect label sequence INLINEFORM2 , where each INLINEFORM3 comes from a finite label set INLINEFORM4 which describes the possible aspect labels. As shown in the example below:
INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 denote beginning of, inside and outside of the aspect span respectively. Note that in commonly-used datasets such as BIBREF12 , the gold standard opinions are usually not annotated.
Model Description
As shown in Figure FIGREF3 , our model contains two key components, namely Truncated History-Attention (THA) and Selective Transformation Network (STN), for capturing aspect detection history and opinion summary respectively. THA and STN are built on two LSTMs that generate the initial word representations for the primary ATE task and the auxiliary opinion detection task respectively. THA is designed to integrate the information of aspect detection history into the current aspect feature to generate a new history-aware aspect representation. STN first calculates a new opinion representation conditioned on the current aspect candidate. Then, we employ a bi-linear attention network to calculate the opinion summary as the weighted sum of the new opinion representations, according to their associations with the current aspect representation. Finally, the history-aware aspect representation and the opinion summary are concatenated as features for aspect prediction of the current time step.
As Recurrent Neural Networks can record the sequential information BIBREF13 , we employ two vanilla LSTMs to build the initial token-level contextualized representations for sequence labeling of the ATE task and the auxiliary opinion word detection task respectively. For simplicity, let INLINEFORM0 denote an LSTM unit where INLINEFORM1 is the task indicator. In the following sections, without specification, the symbols with superscript INLINEFORM2 and INLINEFORM3 are the notations used in the ATE task and the opinion detection task respectively. We use Bi-Directional LSTM to generate the initial token-level representations INLINEFORM4 ( INLINEFORM5 is the dimension of hidden states): DISPLAYFORM0
In principle, RNN can memorize the entire history of the predictions BIBREF13 , but there is no mechanism to exploit the relation between previous predictions and the current prediction. As discussed above, such relation could be useful because of two reasons: (1) reducing the model's error space in predicting the current label by considering the definition of B-I-O schema, (2) improving the prediction accuracy for multiple aspects in one coordinate structure.
We propose a Truncated History-Attention (THA) component (the THA block in Figure FIGREF3 ) to explicitly model the aspect-aspect relation. Specifically, THA caches the most recent INLINEFORM0 hidden states. At the current prediction time step INLINEFORM1 , THA calculates the normalized importance score INLINEFORM2 of each cached state INLINEFORM3 ( INLINEFORM4 ) as follows: DISPLAYFORM0
DISPLAYFORM0
INLINEFORM0 denotes the previous history-aware aspect representation (refer to Eq. EQREF12 ). INLINEFORM1 can be learned during training. INLINEFORM2 are parameters associated with previous aspect representations, current aspect representation and previous history-aware aspect representations respectively. Then, the aspect history INLINEFORM3 is obtained as follows: DISPLAYFORM0
To benefit from the previous aspect detection, we consolidate the hidden aspect representation with the distilled aspect history to generate features for the current prediction. Specifically, we adopt a way similar to the residual block BIBREF14 , which is shown to be useful in refining word-level features in Machine Translation BIBREF15 and Part-Of-Speech tagging BIBREF16 , to calculate the history-aware aspect representations INLINEFORM0 at the time step INLINEFORM1 : DISPLAYFORM0
where ReLU is the relu activation function.
Previous works show that modeling aspect-opinion association is helpful to improve the accuracy of ATE, as exemplified in employing attention mechanism for calculating the opinion information BIBREF9 , BIBREF17 . MIN BIBREF17 focuses on a few surrounding opinion representations and computes their importance scores according to the proximity and the opinion salience derived from a given opinion lexicon. However, it is unable to capture the long-range association between aspects and opinions. Besides, the association is not strong because only the distance information is modeled. Although CMLA BIBREF9 can exploit global opinion information for aspect extraction, it may suffer from the noise brought in by attention-based feature aggregation. Taking the aspect term “fish” in “Furthermore, while the fish is unquestionably fresh, rolls tend to be inexplicably bland.” as an example, it might be enough to tell “fish” is an aspect given the appearance of the strongly related opinion “fresh”. However, CMLA employs conventional attention and does not have a mechanism to suppress the noise caused by other terms such as “rolls”. Dependency parsing seems to be a good solution for finding the most related opinion and indeed it was utilized in BIBREF8 , but the parser is prone to generating mistakes when processing the informal online reviews, as discussed in BIBREF17 .
To make use of opinion information and suppress the possible noise, we propose a novel Selective Transformation Network (STN) (the STN block in Figure FIGREF3 ), and insert it before attending to global opinion features so that more important features with respect to a given aspect candidate will be highlighted. Specifically, STN first calculates a new opinion representation INLINEFORM0 given the current aspect feature INLINEFORM1 as follows: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are parameters for history-aware aspect representations and opinion representations respectively. They map INLINEFORM2 and INLINEFORM3 to the same subspace. Here the aspect feature INLINEFORM4 acts as a “filter” to keep more important opinion features. Equation EQREF14 also introduces a residual block to obtain a better opinion representation INLINEFORM5 , which is conditioned on the current aspect feature INLINEFORM6 .
For distilling the global opinion summary, we introduce a bi-linear term to calculate the association score between INLINEFORM0 and each INLINEFORM1 : DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are parameters of the Bi-Linear Attention layer. The improved opinion summary INLINEFORM2 at the time INLINEFORM3 is obtained via the weighted sum of the opinion representations: DISPLAYFORM0
Finally, we concatenate the opinion summary INLINEFORM0 and the history-aware aspect representation INLINEFORM1 and feed it into the top-most fully-connected (FC) layer for aspect prediction: DISPLAYFORM0 DISPLAYFORM1
Note that our framework actually performs a multi-task learning, i.e. predicting both aspects and opinions. We regard the initial token-level representations INLINEFORM0 as the features for opinion prediction: DISPLAYFORM0
INLINEFORM0 and INLINEFORM1 are parameters of the FC layers.
Joint Training
All the components in the proposed framework are differentiable. Thus, our framework can be efficiently trained with gradient methods. We use the token-level cross-entropy error between the predicted distribution INLINEFORM0 ( INLINEFORM1 ) and the gold distribution INLINEFORM2 as the loss function: DISPLAYFORM0
Then, the losses from both tasks are combined to form the training objective of the entire model: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 represent the loss functions for aspect and opinion extractions respectively.
Datasets
To evaluate the effectiveness of the proposed framework for the ATE task, we conduct experiments over four benchmark datasets from the SemEval ABSA challenge BIBREF1 , BIBREF18 , BIBREF12 . Table TABREF24 shows their statistics. INLINEFORM0 (SemEval 2014) contains reviews of the laptop domain and those of INLINEFORM1 (SemEval 2014), INLINEFORM2 (SemEval 2015) and INLINEFORM3 (SemEval 2016) are for the restaurant domain. In these datasets, aspect terms have been labeled by the task organizer.
Gold standard annotations for opinion words are not provided. Thus, we choose words with strong subjectivity from MPQA to provide the distant supervision BIBREF19 . To compare with the best SemEval systems and the current state-of-the-art methods, we use the standard train-test split in SemEval challenge as shown in Table TABREF24 .
Comparisons
We compare our framework with the following methods:
CRF-1: Conditional Random Fields with basic feature templates.
CRF-2: Conditional Random Fields with basic feature templates and word embeddings.
Semi-CRF: First-order Semi-Markov Conditional Random Fields BIBREF20 and the feature templates in BIBREF21 are adopted.
LSTM: Vanilla bi-directional LSTM with pre-trained word embeddings.
IHS_RD BIBREF2 , DLIREC BIBREF3 , EliXa BIBREF22 , NLANGP BIBREF4 : The winning systems in the ATE subtask in SemEval ABSA challenge BIBREF1 , BIBREF18 , BIBREF12 .
WDEmb BIBREF5 : Enhanced CRF with word embeddings, dependency path embeddings and linear context embeddings.
MIN BIBREF17 : MIN consists of three LSTMs. Two LSTMs are employed to model the memory interactions between ATE and opinion detection. The last one is a vanilla LSTM used to predict the subjectivity of the sentence as additional guidance.
RNCRF BIBREF8 : CRF with high-level representations learned from Dependency Tree based Recursive Neural Network.
CMLA BIBREF9 : CMLA is a multi-layer architecture where each layer consists of two coupled GRUs to model the relation between aspect terms and opinion words.
To clarify, our framework aims at extracting aspect terms and employs the opinion information only as an auxiliary signal, while RNCRF and CMLA perform joint extraction of aspects and opinions. Nevertheless, the comparison between our framework and RNCRF/CMLA is still fair, because we do not use the manually annotated opinions that RNCRF and CMLA rely on; instead, we employ an existing opinion lexicon to provide weak opinion supervision.
Settings
We pre-process each dataset by lowercasing all words and replacing all punctuation with PUNCT. We use pre-trained GloVe 840B vectors BIBREF23 to initialize the word embeddings and the dimension (i.e., INLINEFORM0 ) is 300. For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution INLINEFORM1 as done in BIBREF24 . All of the weight matrices except those in LSTMs are initialized from the uniform distribution INLINEFORM2 . For the initialization of the matrices in LSTMs, we adopt the Glorot uniform strategy BIBREF25 . All biases are initialized to 0.
The model is trained with SGD. We apply dropout over the ultimate aspect/opinion features and the input word embeddings of the LSTMs. The dropout rates are empirically set to 0.5. With 5-fold cross-validation on the training data of INLINEFORM0 , the other hyper-parameters are set as follows: INLINEFORM1 , INLINEFORM2 ; the number of cached historical aspect representations INLINEFORM3 is 5; the learning rate of SGD is 0.07.
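The initialization and optimization settings above can be collected into a short sketch. Only the constants spelled out in the text (300-dimensional GloVe 840B vectors, dropout rate 0.5, SGD learning rate 0.07, five cached historical aspect representations) are taken from the paper; the uniform ranges are hidden behind the INLINEFORM placeholders, so the bound used for out-of-vocabulary words below is a placeholder, and the variable names are illustrative.

```python
import numpy as np

EMB_DIM = 300          # pre-trained GloVe 840B vectors
DROPOUT_RATE = 0.5     # applied to the final aspect/opinion features and the LSTM input embeddings
LEARNING_RATE = 0.07   # SGD
NUM_CACHED_STATES = 5  # number of cached historical aspect representations
OOV_BOUND = 0.1        # placeholder: the actual uniform range is elided in the text


def init_word_embedding(word, glove):
    """Use the pre-trained GloVe vector when available, otherwise sample uniformly at random."""
    if word in glove:
        return np.asarray(glove[word], dtype=np.float32)
    return np.random.uniform(-OOV_BOUND, OOV_BOUND, EMB_DIM).astype(np.float32)
```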
Main Results
As shown in Table TABREF39 , the proposed framework consistently obtains the best scores on all of the four datasets. Compared with the winning systems of SemEval ABSA, our framework achieves 5.0%, 1.6%, 1.4%, 1.3% absolute gains on INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 respectively.
Our framework outperforms RNCRF, a state-of-the-art model based on dependency parsing, on all datasets. We also notice that RNCRF does not perform well on INLINEFORM0 and INLINEFORM1 (3.7% and 3.9% lower than ours). We find that INLINEFORM2 and INLINEFORM3 contain many informal reviews, so RNCRF's performance degradation is probably due to errors from the dependency parser when processing such informal texts.
CMLA and MIN do not rely on dependency parsing; instead, they employ attention mechanisms to distill opinion information that helps aspect extraction. Our framework consistently performs better than both. The gains presumably come from two sources: (1) in our model, the opinion summary is exploited only after performing the selective transformation conditioned on the current aspect features, so the summary can, to some extent, avoid the noise introduced by directly applying conventional attention; (2) guided by the history attention, our model can discover uncommon aspects that appear in coordinate structures with commonly-used aspects.
CRF with the basic feature templates is not strong; therefore, we add CRF-2 as another baseline. As shown in Table TABREF39 , CRF-2 with word embeddings achieves much better results than CRF-1 on all datasets. WDEmb, an enhanced CRF-based method that additionally uses dependency context embeddings, performs better than CRF-2. The above comparison shows that word embeddings are useful and that embeddings incorporating structural information can further improve the performance.
Ablation Study
To further investigate the efficacy of the key components in our framework, namely THA and STN, we perform an ablation study, as shown in the second block of Table TABREF39 . The results show that both THA and STN help improve the performance, and the contribution of STN is slightly larger than that of THA. “OURS w/o THA & STN” only keeps the basic bi-linear attention. Although it performs reasonably well, it is still less competitive than the strongest baseline (i.e., CMLA), suggesting that only using an attention mechanism to distill the opinion summary is not enough. After inserting the STN component before the bi-linear attention, i.e., “OURS w/o THA”, we obtain about 1% absolute gains on each dataset, making the performance comparable to CMLA. By adding THA, i.e., “OURS”, the performance is further improved, and all state-of-the-art methods are surpassed.
Attention Visualization and Case Study
In Figure FIGREF41 , we visualize the opinion attention scores of the words in two example sentences with the candidate aspects “maitre-D” and “bathroom”. The scores in Figures FIGREF41 and FIGREF41 show that our full model captures the related opinion words very accurately with significantly larger scores, i.e., “incredibly”, “unwelcoming” and “arrogant” for “maitre-D”, and “unfriendly” and “filthy” for “bathroom”. “OURS w/o STN” directly applies attention over the opinion hidden states INLINEFORM0 's, similar to what CMLA does. As shown in Figure FIGREF41 , it captures some unrelated opinion words (e.g., “fine”) and even some non-opinionated words. As a result, it brings some noise into the global opinion summary, which in turn hurts the final prediction accuracy. This example demonstrates that the proposed STN effectively helps the model attend to the opinion words most related to a given aspect.
Some predictions of our model, LSTM, and OURS w/o THA & STN are given in Table TABREF43 . The models incorporating the attention-based opinion summary (i.e., OURS and OURS w/o THA & STN) can better determine whether commonly-used nouns are aspect terms (e.g., “device” in the first input), since they make decisions based on the global opinion information. Besides, they are able to extract some infrequent or even misspelled aspect terms (e.g., “survice” in the second input) based on the indicative clues provided by opinion words. For the last three cases, which involve aspects in coordinate structures (the third and the fourth) or long aspects (the fifth), our model gives precise predictions owing to the previous detection clues captured by THA. Without these clues, the baseline models fail.
Related Work
Some initial works BIBREF26 developed a bootstrapping framework for tackling Aspect Term Extraction (ATE) based on the observation that opinion words are usually located around the aspects. BIBREF27 and BIBREF28 performed co-extraction of aspect terms and opinion words based on sophisticated syntactic patterns. However, relying on syntactic patterns suffers from parsing errors when processing informal online reviews. To avoid this drawback, BIBREF29 , BIBREF30 employed word-based translation models. Specifically, these models formulated the ATE task as a monolingual word alignment process, and the aspect-opinion relation is captured by alignment links rather than word dependencies. The ATE task can also be formulated as a token-level sequence labeling problem. The winning systems BIBREF2 , BIBREF22 , BIBREF4 of the SemEval ABSA challenges employed traditional sequence models, such as Conditional Random Fields (CRFs) and Maximum Entropy (ME), to detect aspects. Besides requiring heavy feature engineering, these systems also ignored opinion information.
Recently, neural network based models, such as LSTM-based BIBREF6 and CNN-based BIBREF31 methods, have become the mainstream approach. Later on, some neural models that jointly extract aspects and opinions were proposed. BIBREF8 performs the two tasks in a single Tree-Based Recursive Neural Network. Its network structure depends on dependency parsing, which is prone to errors on informal reviews. CMLA BIBREF9 consists of multiple attention layers on top of standard GRUs to extract the aspects and opinion words. Similarly, MIN BIBREF17 employs multiple LSTMs to interactively perform aspect term extraction and opinion word extraction in a multi-task learning framework. Our framework is different from them in two perspectives: (1) it filters the opinion summary by incorporating the aspect features at each time step into the original opinion representations; (2) it exploits the history of aspect detection to capture coordinate structures and previous aspect features.
Concluding Discussions
For more accurate aspect term extraction, we explored two important types of information, namely aspect detection history and opinion summary, and designed two corresponding components, i.e., truncated history attention and selective transformation network. Experimental results show that our model outperforms joint extraction methods such as RNCRF and CMLA on ATE. This suggests that joint extraction sacrifices the accuracy of aspect prediction, even though the ground-truth opinion words used by those methods were manually annotated by their authors. Moreover, such joint extraction methods do not model the correspondence between the extracted aspect terms and opinion words. Therefore, the necessity of joint extraction should be questioned, given the experimental findings in this paper. | Yes
a6950c22c7919f86b16384facc97f2cf66e5941d | a6950c22c7919f86b16384facc97f2cf66e5941d_0 | Q: Which dataset(s) do they use to train the model? | INLINEFORM0 (SemEval 2014) contains reviews of the laptop domain and those of INLINEFORM1 (SemEval 2014), INLINEFORM2 (SemEval 2015) and INLINEFORM3 (SemEval 2016) are for the restaurant domain.
54be3541cfff6574dba067f1e581444537a417db | 54be3541cfff6574dba067f1e581444537a417db_0 | Q: By how much do they outperform state-of-the-art methods?
Text: Introduction
Aspect-Based Sentiment Analysis (ABSA) involves detecting opinion targets and locating opinion indicators in sentences in product review texts BIBREF0 . The first sub-task, called Aspect Term Extraction (ATE), is to identify the phrases targeted by opinion indicators in review sentences. For example, in the sentence “I love the operating system and preloaded software”, the words “operating system” and “preloaded software” should be extracted as aspect terms, and the sentiment on them is conveyed by the opinion word “love”. According to the task definition, for a term/phrase being regarded as an aspect, it should co-occur with some “opinion words” that indicate a sentiment polarity on it BIBREF1 .
Many researchers formulated ATE as a sequence labeling problem or a token-level classification problem. Traditional sequence models such as Conditional Random Fields (CRFs) BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , Long Short-Term Memory Networks (LSTMs) BIBREF6 and classification models such as Support Vector Machine (SVM) BIBREF7 have been applied to tackle the ATE task, and achieved reasonable performance. One drawback of these existing works is that they do not exploit the fact that, according to the task definition, aspect terms should co-occur with opinion-indicating words. Thus, the above methods tend to output false positives on those frequently used aspect terms in non-opinionated sentences, e.g., the word “restaurant” in “the restaurant was packed at first, so we waited for 20 minutes”, which should not be extracted because the sentence does not convey any opinion on it.
There are a few works that consider opinion terms when tackling the ATE task. BIBREF8 proposed Recursive Neural Conditional Random Fields (RNCRF) to explicitly extract aspects and opinions in a single framework. Aspect-opinion relation is modeled via joint extraction and dependency-based representation learning. One assumption of RNCRF is that dependency parsing will capture the relation between aspect terms and opinion words in the same sentence so that the joint extraction can benefit. Such assumption is usually valid for simple sentences, but rather fragile for some complicated structures, such as clauses and parenthesis. Moreover, RNCRF suffers from errors of dependency parsing because its network construction hinges on the dependency tree of inputs. CMLA BIBREF9 models aspect-opinion relation without using syntactic information. Instead, it enables the two tasks to share information via attention mechanism. For example, it exploits the global opinion information by directly computing the association score between the aspect prototype and individual opinion hidden representations and then performing weighted aggregation. However, such aggregation may introduce noise. To some extent, this drawback is inherited from the attention mechanism, as also observed in machine translation BIBREF10 and image captioning BIBREF11 .
To make better use of opinion information to assist aspect term extraction, we distill the opinion information of the whole input sentence into opinion summary, and such distillation is conditioned on a particular current token for aspect prediction. Then, the opinion summary is employed as part of features for the current aspect prediction. Taking the sentence “the restaurant is cute but not upscale” as an example, when our model performs the prediction for the word “restaurant”, it first generates an opinion summary of the entire sentence conditioned on “restaurant”. Due to the strong correlation between “restaurant' and “upscale” (an opinion word), the opinion summary will convey more information of “upscale” so that it will help predict “restaurant” as an aspect with high probability. Note that the opinion summary is built on the initial opinion features coming from an auxiliary opinion detection task, and such initial features already distinguish opinion words to some extent. Moreover, we propose a novel transformation network that helps strengthen the favorable correlations, e.g. between “restaurant' and “upscale”, so that the produced opinion summary involves less noise.
Besides the opinion summary, another useful clue we explore is the aspect prediction history due to the inspiration of two observations: (1) In sequential labeling, the predictions at the previous time steps are useful clues for reducing the error space of the current prediction. For example, in the B-I-O tagging (refer to Section SECREF4 ), if the previous prediction is “O”, then the current prediction cannot be “I”; (2) It is observed that some sentences contain multiple aspect terms. For example, “Apple is unmatched in product quality, aesthetics, craftmanship, and customer service” has a coordinate structure of aspects. Under this structure, the previously predicted commonly-used aspect terms (e.g., “product quality”) can guide the model to find the infrequent aspect terms (e.g., “craftmanship”). To capture the above clues, our model distills the information of the previous aspect detection for making a better prediction on the current state.
Concretely, we propose a framework for more accurate aspect term extraction by exploiting the opinion summary and the aspect detection history. Firstly, we employ two standard Long-Short Term Memory Networks (LSTMs) for building the initial aspect and opinion representations recording the sequential information. To encode the historical information into the initial aspect representations at each time step, we propose truncated history attention to distill useful features from the most recent aspect predictions and generate the history-aware aspect representations. We also design a selective transformation network to obtain the opinion summary at each time step. Specifically, we apply the aspect information to transform the initial opinion representations and apply attention over the transformed representations to generate the opinion summary. Experimental results show that our framework can outperform state-of-the-art methods.
The ATE Task
Given a sequence INLINEFORM0 of INLINEFORM1 words, the ATE task can be formulated as a token/word level sequence labeling problem to predict an aspect label sequence INLINEFORM2 , where each INLINEFORM3 comes from a finite label set INLINEFORM4 which describes the possible aspect labels. As shown in the example below:
INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 denote beginning of, inside and outside of the aspect span respectively. Note that in commonly-used datasets such as BIBREF12 , the gold standard opinions are usually not annotated.
Model Description
As shown in Figure FIGREF3 , our model contains two key components, namely Truncated History-Attention (THA) and Selective Transformation Network (STN), for capturing aspect detection history and opinion summary respectively. THA and STN are built on two LSTMs that generate the initial word representations for the primary ATE task and the auxiliary opinion detection task respectively. THA is designed to integrate the information of aspect detection history into the current aspect feature to generate a new history-aware aspect representation. STN first calculates a new opinion representation conditioned on the current aspect candidate. Then, we employ a bi-linear attention network to calculate the opinion summary as the weighted sum of the new opinion representations, according to their associations with the current aspect representation. Finally, the history-aware aspect representation and the opinion summary are concatenated as features for aspect prediction of the current time step.
As Recurrent Neural Networks can record the sequential information BIBREF13 , we employ two vanilla LSTMs to build the initial token-level contextualized representations for sequence labeling of the ATE task and the auxiliary opinion word detection task respectively. For simplicity, let INLINEFORM0 denote an LSTM unit where INLINEFORM1 is the task indicator. In the following sections, without specification, the symbols with superscript INLINEFORM2 and INLINEFORM3 are the notations used in the ATE task and the opinion detection task respectively. We use Bi-Directional LSTM to generate the initial token-level representations INLINEFORM4 ( INLINEFORM5 is the dimension of hidden states): DISPLAYFORM0
In principle, RNN can memorize the entire history of the predictions BIBREF13 , but there is no mechanism to exploit the relation between previous predictions and the current prediction. As discussed above, such relation could be useful because of two reasons: (1) reducing the model's error space in predicting the current label by considering the definition of B-I-O schema, (2) improving the prediction accuracy for multiple aspects in one coordinate structure.
We propose a Truncated History-Attention (THA) component (the THA block in Figure FIGREF3 ) to explicitly model the aspect-aspect relation. Specifically, THA caches the most recent INLINEFORM0 hidden states. At the current prediction time step INLINEFORM1 , THA calculates the normalized importance score INLINEFORM2 of each cached state INLINEFORM3 ( INLINEFORM4 ) as follows: DISPLAYFORM0
DISPLAYFORM0
INLINEFORM0 denotes the previous history-aware aspect representation (refer to Eq. EQREF12 ). INLINEFORM1 can be learned during training. INLINEFORM2 are parameters associated with previous aspect representations, current aspect representation and previous history-aware aspect representations respectively. Then, the aspect history INLINEFORM3 is obtained as follows: DISPLAYFORM0
To benefit from the previous aspect detection, we consolidate the hidden aspect representation with the distilled aspect history to generate features for the current prediction. Specifically, we adopt a way similar to the residual block BIBREF14 , which is shown to be useful in refining word-level features in Machine Translation BIBREF15 and Part-Of-Speech tagging BIBREF16 , to calculate the history-aware aspect representations INLINEFORM0 at the time step INLINEFORM1 : DISPLAYFORM0
where ReLU is the relu activation function.
Previous works show that modeling aspect-opinion association is helpful to improve the accuracy of ATE, as exemplified in employing attention mechanism for calculating the opinion information BIBREF9 , BIBREF17 . MIN BIBREF17 focuses on a few surrounding opinion representations and computes their importance scores according to the proximity and the opinion salience derived from a given opinion lexicon. However, it is unable to capture the long-range association between aspects and opinions. Besides, the association is not strong because only the distance information is modeled. Although CMLA BIBREF9 can exploit global opinion information for aspect extraction, it may suffer from the noise brought in by attention-based feature aggregation. Taking the aspect term “fish” in “Furthermore, while the fish is unquestionably fresh, rolls tend to be inexplicably bland.” as an example, it might be enough to tell “fish” is an aspect given the appearance of the strongly related opinion “fresh”. However, CMLA employs conventional attention and does not have a mechanism to suppress the noise caused by other terms such as “rolls”. Dependency parsing seems to be a good solution for finding the most related opinion and indeed it was utilized in BIBREF8 , but the parser is prone to generating mistakes when processing the informal online reviews, as discussed in BIBREF17 .
To make use of opinion information and suppress the possible noise, we propose a novel Selective Transformation Network (STN) (the STN block in Figure FIGREF3 ), and insert it before attending to global opinion features so that more important features with respect to a given aspect candidate will be highlighted. Specifically, STN first calculates a new opinion representation INLINEFORM0 given the current aspect feature INLINEFORM1 as follows: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are parameters for history-aware aspect representations and opinion representations respectively. They map INLINEFORM2 and INLINEFORM3 to the same subspace. Here the aspect feature INLINEFORM4 acts as a “filter” to keep more important opinion features. Equation EQREF14 also introduces a residual block to obtain a better opinion representation INLINEFORM5 , which is conditioned on the current aspect feature INLINEFORM6 .
For distilling the global opinion summary, we introduce a bi-linear term to calculate the association score between INLINEFORM0 and each INLINEFORM1 : DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are parameters of the Bi-Linear Attention layer. The improved opinion summary INLINEFORM2 at the time INLINEFORM3 is obtained via the weighted sum of the opinion representations: DISPLAYFORM0
Finally, we concatenate the opinion summary INLINEFORM0 and the history-aware aspect representation INLINEFORM1 and feed it into the top-most fully-connected (FC) layer for aspect prediction: DISPLAYFORM0 DISPLAYFORM1
Note that our framework actually performs a multi-task learning, i.e. predicting both aspects and opinions. We regard the initial token-level representations INLINEFORM0 as the features for opinion prediction: DISPLAYFORM0
INLINEFORM0 and INLINEFORM1 are parameters of the FC layers.
Joint Training
All the components in the proposed framework are differentiable. Thus, our framework can be efficiently trained with gradient methods. We use the token-level cross-entropy error between the predicted distribution INLINEFORM0 ( INLINEFORM1 ) and the gold distribution INLINEFORM2 as the loss function: DISPLAYFORM0
Then, the losses from both tasks are combined to form the training objective of the entire model: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 represent the loss functions for aspect and opinion extractions respectively.
Datasets
To evaluate the effectiveness of the proposed framework for the ATE task, we conduct experiments over four benchmark datasets from the SemEval ABSA challenge BIBREF1 , BIBREF18 , BIBREF12 . Table TABREF24 shows their statistics. INLINEFORM0 (SemEval 2014) contains reviews of the laptop domain and those of INLINEFORM1 (SemEval 2014), INLINEFORM2 (SemEval 2015) and INLINEFORM3 (SemEval 2016) are for the restaurant domain. In these datasets, aspect terms have been labeled by the task organizer.
Gold standard annotations for opinion words are not provided. Thus, we choose words with strong subjectivity from MPQA to provide the distant supervision BIBREF19 . To compare with the best SemEval systems and the current state-of-the-art methods, we use the standard train-test split in SemEval challenge as shown in Table TABREF24 .
Comparisons
We compare our framework with the following methods:
CRF-1: Conditional Random Fields with basic feature templates.
CRF-2: Conditional Random Fields with basic feature templates and word embeddings.
Semi-CRF: First-order Semi-Markov Conditional Random Fields BIBREF20 and the feature templates in BIBREF21 are adopted.
LSTM: Vanilla bi-directional LSTM with pre-trained word embeddings.
IHS_RD BIBREF2 , DLIREC BIBREF3 , EliXa BIBREF22 , NLANGP BIBREF4 : The winning systems in the ATE subtask in SemEval ABSA challenge BIBREF1 , BIBREF18 , BIBREF12 .
WDEmb BIBREF5 : Enhanced CRF with word embeddings, dependency path embeddings and linear context embeddings.
MIN BIBREF17 : MIN consists of three LSTMs. Two LSTMs are employed to model the memory interactions between ATE and opinion detection. The last one is a vanilla LSTM used to predict the subjectivity of the sentence as additional guidance.
RNCRF BIBREF8 : CRF with high-level representations learned from Dependency Tree based Recursive Neural Network.
CMLA BIBREF9 : CMLA is a multi-layer architecture where each layer consists of two coupled GRUs to model the relation between aspect terms and opinion words.
To clarify, our framework aims at extracting aspect terms where the opinion information is employed as auxiliary, while RNCRF and CMLA perform joint extraction of aspects and opinions. Nevertheless, the comparison between our framework and RNCRF/CMLA is still fair, because we do not use manually annotated opinions as used by RNCRF and CMLA, instead, we employ an existing opinion lexicon to provide weak opinion supervision.
Settings
We pre-processed each dataset by lowercasing all words and replace all punctuations with PUNCT. We use pre-trained GloVe 840B vectors BIBREF23 to initialize the word embeddings and the dimension (i.e., INLINEFORM0 ) is 300. For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution INLINEFORM1 as done in BIBREF24 . All of the weight matrices except those in LSTMs are initialized from the uniform distribution INLINEFORM2 . For the initialization of the matrices in LSTMs, we adopt Glorot Uniform strategy BIBREF25 . Besides, all biases are initialized as 0's.
The model is trained with SGD. We apply dropout over the ultimate aspect/opinion features and the input word embeddings of LSTMs. The dropout rates are empirically set as 0.5. With 5-fold cross-validation on the training data of INLINEFORM0 , other hyper-parameters are set as follows: INLINEFORM1 , INLINEFORM2 ; the number of cached historical aspect representations INLINEFORM3 is 5; the learning rate of SGD is 0.07.
Main Results
As shown in Table TABREF39 , the proposed framework consistently obtains the best scores on all of the four datasets. Compared with the winning systems of SemEval ABSA, our framework achieves 5.0%, 1.6%, 1.4%, 1.3% absolute gains on INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 respectively.
Our framework can outperform RNCRF, a state-of-the-art model based on dependency parsing, on all datasets. We also notice that RNCRF does not perform well on INLINEFORM0 and INLINEFORM1 (3.7% and 3.9% inferior than ours). We find that INLINEFORM2 and INLINEFORM3 contain many informal reviews, thus RNCRF's performance degradation is probably due to the errors from the dependency parser when processing such informal texts.
CMLA and MIN do not rely on dependency parsing, instead, they employ attention mechanism to distill opinion information to help aspect extraction. Our framework consistently performs better than them. The gains presumably come from two perspectives: (1) In our model, the opinion summary is exploited after performing the selective transformation conditioned on the current aspect features, thus the summary can to some extent avoid the noise due to directly applying conventional attention. (2) Our model can discover some uncommon aspects under the guidance of some commonly-used aspects in coordinate structures by the history attention.
CRF with basic feature template is not strong, therefore, we add CRF-2 as another baseline. As shown in Table TABREF39 , CRF-2 with word embeddings achieves much better results than CRF-1 on all datasets. WDEmb, which is also an enhanced CRF-based method using additional dependency context embeddings, obtains superior performances than CRF-2. Therefore, the above comparison shows that word embeddings are useful and the embeddings incorporating structure information can further improve the performance.
Ablation Study
To further investigate the efficacy of the key components in our framework, namely, THA and STN, we perform ablation study as shown in the second block of Table TABREF39 . The results show that each of THA and STN is helpful for improving the performance, and the contribution of STN is slightly larger than THA. “OURS w/o THA & STN” only keeps the basic bi-linear attention. Although it performs not bad, it is still less competitive compared with the strongest baseline (i.e., CMLA), suggesting that only using attention mechanism to distill opinion summary is not enough. After inserting the STN component before the bi-linear attention, i.e. “OURS w/o THA”, we get about 1% absolute gains on each dataset, and then the performance is comparable to CMLA. By adding THA, i.e. “OURS”, the performance is further improved, and all state-of-the-art methods are surpassed.
Attention Visualization and Case Study
In Figure FIGREF41 , we visualize the opinion attention scores of the words in two example sentences with the candidate aspects “maitre-D” and “bathroom”. The scores in Figures FIGREF41 and FIGREF41 show that our full model captures the related opinion words very accurately with significantly larger scores, i.e. “incredibly”, “unwelcoming” and “arrogant” for “maitre-D”, and “unfriendly” and “filthy” for “bathroom”. “OURS w/o STN” directly applies attention over the opinion hidden states INLINEFORM0 's, similar to what CMLA does. As shown in Figure FIGREF41 , it captures some unrelated opinion words (e.g. “fine”) and even some non-opinionated words. As a result, it brings in some noise into the global opinion summary, and consequently the final prediction accuracy will be affected. This example demonstrates that the proposed STN works pretty well to help attend to more related opinion words given a particular aspect.
Some predictions of our model and those of LSTM and OURS w/o THA & STN are given in Table TABREF43 . The models incorporating attention-based opinion summary (i.e., OURS and OURS w/o THA & STN) can better determine if the commonly-used nouns are aspect terms or not (e.g. “device” in the first input), since they make decisions based on the global opinion information. Besides, they are able to extract some infrequent or even misspelled aspect terms (e.g. “survice” in the second input) based on the indicative clues provided by opinion words. For the last three cases, having aspects in coordinate structures (i.e. the third and the fourth) or long aspects (i.e. the fifth), our model can give precise predictions owing to the previous detection clues captured by THA. Without using these clues, the baseline models fail.
Related Work
Some initial works BIBREF26 developed a bootstrapping framework for tackling Aspect Term Extraction (ATE) based on the observation that opinion words are usually located around the aspects. BIBREF27 and BIBREF28 performed co-extraction of aspect terms and opinion words based on sophisticated syntactic patterns. However, relying on syntactic patterns suffers from parsing errors when processing informal online reviews. To avoid this drawback, BIBREF29 , BIBREF30 employed word-based translation models. Specifically, these models formulated the ATE task as a monolingual word alignment process and aspect-opinion relation is captured by alignment links rather than word dependencies. The ATE task can also be formulated as a token-level sequence labeling problem. The winning systems BIBREF2 , BIBREF22 , BIBREF4 of SemEval ABSA challenges employed traditional sequence models, such as Conditional Random Fields (CRFs) and Maximum Entropy (ME), to detect aspects. Besides heavy feature engineering, they also ignored the consideration of opinions.
Recently, neural network based models, such as LSTM-based BIBREF6 and CNN-based BIBREF31 methods, have become the mainstream approach. Later on, some neural models jointly extracting aspects and opinions were proposed. BIBREF8 performs the two tasks in a single Tree-Based Recursive Neural Network. Their network structure depends on dependency parsing, which is prone to errors on informal reviews. CMLA BIBREF9 consists of multiple attention layers on top of standard GRUs to extract the aspects and opinion words. Similarly, MIN BIBREF17 employs multiple LSTMs to interactively perform aspect term extraction and opinion word extraction in a multi-task learning framework. Our framework differs from them in two respects: (1) it filters the opinion summary by incorporating the aspect features at each time step into the original opinion representations; (2) it exploits the history of aspect detection to capture coordinate structures and previous aspect features.
Concluding Discussions
For more accurate aspect term extraction, we explored two important types of information, namely aspect detection history and opinion summary. We designed two components, i.e. the truncated history attention and the selective transformation network. Experimental results show that our model outperforms joint extraction works such as RNCRF and CMLA on ATE. This suggests that joint extraction sacrifices the accuracy of aspect prediction, even though the ground-truth opinion words were annotated by those authors. Moreover, one should notice that those joint extraction methods do not model the correspondence between the extracted aspect terms and opinion words. Therefore, the necessity of such joint extraction should be questioned, given the experimental findings in this paper. | Compared with the winning systems of SemEval ABSA, our framework achieves 5.0%, 1.6%, 1.4%, 1.3% absolute gains on INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 respectively. |
221e9189a9d2431902d8ea833f486a38a76cbd8e | 221e9189a9d2431902d8ea833f486a38a76cbd8e_0 | Q: What is the average number of turns per dialog?
Text: Introduction
Voice-based “personal assistants” such as Apple's SIRI, Microsoft's Cortana, Amazon Alexa, and the Google Assistant have finally entered the mainstream. This development is generally attributed to major breakthroughs in speech recognition and text-to-speech (TTS) technologies aided by recent progress in deep learning BIBREF0, exponential gains in compute power BIBREF1, BIBREF2, and the ubiquity of powerful mobile devices. The accuracy of machine-learned speech recognizers BIBREF3 and speech synthesizers BIBREF4 is good enough for deployment in real-world products, and this progress has been driven by publicly available labeled datasets. However, conspicuously absent from this list is equal progress in machine-learned conversational natural language understanding (NLU) and generation (NLG). The NLU and NLG components of dialog systems, from the early research work BIBREF5 to the present commercially available personal assistants, largely rely on rule-based systems. The NLU and NLG systems are often carefully programmed for very narrow and specific cases BIBREF6, BIBREF7. General understanding of natural spoken behaviors across multiple dialog turns, even in single task-oriented situations, is by most accounts still a long way off. In this way, most of these products are very much hand-crafted, with inherent constraints on what users can say, how the system responds, and the order in which the various subtasks can be completed. They are high precision but relatively low coverage. Not only are such systems unscalable, but they lack the flexibility to engage in truly natural conversation.
Yet none of this is surprising. Natural language is heavily context dependent and often ambiguous, especially in multi-turn conversations across multiple topics. It is full of subtle discourse cues and pragmatic signals whose patterns have yet to be thoroughly understood. Enabling an automated system to hold a coherent task-based conversation with a human remains one of computer science's most complex and intriguing unsolved problems BIBREF5. In contrast to more traditional NLP efforts, interest in statistical approaches to dialog understanding and generation aided by machine learning has grown considerably in the last couple of years BIBREF8, BIBREF9, BIBREF10. However, the dearth of high quality, goal-oriented dialog data is considered a major hindrance to more significant progress in this area BIBREF9, BIBREF11.
To help solve the data problem we present Taskmaster-1, a dataset consisting of 13,215 dialogs, including 5,507 spoken and 7,708 written dialogs created with two distinct procedures. Each conversation falls into one of six domains: ordering pizza, creating auto repair appointments, setting up ride service, ordering movie tickets, ordering coffee drinks and making restaurant reservations. For the spoken dialogs, we created a “Wizard of Oz” (WOz) system BIBREF12 to collect two-person, spoken conversations. Crowdsourced workers playing the “user" interacted with human operators playing the “digital assistant” using a web-based interface. In this way, users were led to believe they were interacting with an automated system while it was in fact a human, allowing them to express their turns in natural ways but in the context of an automated interface. We refer to this spoken dialog type as “two-person dialogs". For the written dialogs, we engaged crowdsourced workers to write the full conversation themselves based on scenarios outlined for each task, thereby playing roles of both the user and assistant. We refer to this written dialog type as “self-dialogs". In a departure from traditional annotation techniques BIBREF10, BIBREF8, BIBREF13, dialogs are labeled with simple API calls and arguments. This technique is much easier for annotators to learn and simpler to apply. As such it is more cost effective and, in addition, the same model can be used for multiple service providers.
Taskmaster-1 has richer and more diverse language than the current popular benchmark in task-oriented dialog, MultiWOZ BIBREF13. Table TABREF2 shows that Taskmaster-1 has more unique words and is more difficult for language models to fit. We also find that Taskmaster-1 is more realistic than MultiWOZ. Specifically, the two-person dialogs in Taskmaster-1 involve more real-world entities than seen in MultiWOZ since we do not restrict conversations to a small knowledge base. Beyond the corpus and the methodologies used to create it, we present several baseline models including state-of-the-art neural seq2seq architectures together with perplexity and BLEU scores. We also provide qualitative human performance evaluations for these models and find that automatic evaluation metrics correlate well with human judgments. We will publicly release our corpus containing conversations, API call and argument annotations, and also the human judgments.
Related work ::: Human-machine vs. human-human dialog
BIBREF14 discuss the major features and differences among the existing offerings in an exhaustive and detailed survey of available corpora for data driven learning of dialog systems. One important distinction covered is that of human-human vs. human-machine dialog data, each having its advantages and disadvantages. Many of the existing task-based datasets have been generated from deployed dialog systems such as the Let’s Go Bus Information System BIBREF15 and the various Dialog State Tracking Challenges (DSTCs) BIBREF16. However, it is doubtful that new data-driven systems built with this type of corpus would show much improvement since they would be biased by the existing system and likely mimic its limitations BIBREF17. Since the ultimate goal is to be able to handle complex human language behaviors, it would seem that human-human conversational data is the better choice for spoken dialog system development BIBREF13. However, learning from purely human-human based corpora presents challenges of its own. In particular, human conversation has a different distribution of understanding errors and exhibits turn-taking idiosyncrasies which may not be well suited for interaction with a dialog system BIBREF17, BIBREF14.
Related work ::: The Wizard of Oz (WOz) Approach and MultiWOZ
The WOz framework, first introduced by BIBREF12 as a methodology for iterative design of natural language interfaces, presents a more effective approach to human-human dialog collection. In this setup, users are led to believe they are interacting with an automated assistant but in fact it is a human behind the scenes that controls the system responses. Given the human-level natural language understanding, users quickly realize they can comfortably and naturally express their intent rather than having to modify behaviors as is normally the case with a fully automated assistant. At the same time, the machine-oriented context of the interaction, i.e. the use of TTS and slower turn taking cadence, prevents the conversation from becoming fully fledged, overly complex human discourse. This creates an idealized spoken environment, revealing how users would openly and candidly express themselves with an automated assistant that provided superior natural language understanding.
Perhaps the most relevant work to consider here is the recently released MultiWOZ dataset BIBREF13, since it is similar in size, content and collection methodologies. MultiWOZ has roughly 10,000 dialogs which feature several domains and topics. The dialogs are annotated with both dialog states and dialog acts. MultiWOZ is an entirely written corpus and uses crowdsourced workers for both assistant and user roles. In contrast, Taskmaster-1 has roughly 13,000 dialogs spanning six domains and annotated with API arguments. The two-person spoken dialogs in Taskmaster-1 use crowdsourcing for the user role but trained agents for the assistant role. The assistant's speech is played to the user via TTS. The remaining 7,708 conversations in Taskmaster-1 are self-dialogs, in which crowdsourced workers write the entire conversation themselves. As BIBREF18, BIBREF19 show, self dialogs are surprisingly rich in content.
The Taskmaster Corpus ::: Overview
There are several key attributes that make Taskmaster-1 both unique and effective for data-driven approaches to building dialog systems and for other research.
Spoken and written dialogs: While the spoken sources more closely reflect conversational language BIBREF20, written dialogs are significantly cheaper and easier to gather. This allows for a significant increase in the size of the corpus and in speaker diversity.
Goal-oriented dialogs: All dialogs are based on one of six tasks: ordering pizza, creating auto repair appointments, setting up rides for hire, ordering movie tickets, ordering coffee drinks and making restaurant reservations.
Two collection methods: The two-person dialogs and self-dialogs each have pros and cons, revealing interesting contrasts.
Multiple turns: The average number of utterances per dialog is about 23, which ensures context-rich language behaviors.
API-based annotation: The dataset uses a simple annotation schema providing sufficient grounding for the data while making it easy for workers to apply labels consistently.
Size: The total of 13,215 dialogs in this corpus is on par with similar, recently released datasets such as MultiWOZ BIBREF13.
The Taskmaster Corpus ::: Two-person, spoken dataset
In order to replicate a two-participant, automated digital assistant experience, we built a WOz platform that pairs agents playing the digital assistant with crowdsourced workers playing the user in task-based conversational scenarios. An example dialog from this dataset is given in Figure FIGREF5.
The Taskmaster Corpus ::: Two-person, spoken dataset ::: WOz platform and data pipeline
While it is beyond the scope of this work to describe the entire system in detail, there are several platform features that help illustrate how the process works.
Modality: The agents playing the assistant type their input which is in turn played to the user via text-to-speech (TTS) while the crowdsourced workers playing the user speak aloud to the assistant using their laptop and microphone. We use WebRTC to establish the audio channel. This setup creates a digital assistant-like communication style.
Conversation and user quality control: Once the task is completed, the agents tag each conversation as either successful or problematic depending on whether the session had technical glitches or user behavioral issues. We are also then able to root out problematic users based on this logging.
Agent quality control: Agents are required to log in to the system, which allows us to monitor performance, including the number and length of each session as well as their averages.
User queuing: When there are more users trying to connect to the system than available agents, a queuing mechanism indicates their place in line and connects them automatically once they move to the front of the queue.
Transcription: Once complete, the user's audio-only portion of the dialog is transcribed by a second set of workers and then merged with the assistant's typed input to create a full text version of the dialog. Finally, these conversations are checked for transcription errors and typos and then annotated, as described in Section SECREF48.
The Taskmaster Corpus ::: Two-person, spoken dataset ::: Agents, workers and training
Both agents and crowdsourced workers are given written instructions prior to the session. Examples of each are given in Figure FIGREF6 and Figure FIGREF23. The instructions continue to be displayed on screen to the crowdsourced workers while they interact with the assistant. Instructions are modified at times (for either participant or both) to ensure broader coverage of dialog scenarios that are likely to occur in actual user-assistant interactions. For example, in one case users were asked to change their mind after ordering their first item and in another agents were instructed to tell users that a given item was not available. Finally, in their instructions, crowdsourced workers playing the user are told they will be engaging in conversation with “a digital assistant”. However, it is plausible that some suspect human intervention due to the advanced level of natural language understanding from the assistant side.
Agents playing the assistant role were hired from a pool of dialog analysts and given two hours of training on the system interface as well as on how to handle specific scenarios such as uncooperative users and technical glitches. Uncooperative users were typically those who either ignored agent input or rushed through the conversation with short phrases. Technical issues involved dropped sessions (e.g. failed WebRTC connections) or cases in which the user could not hear the agent or vice-versa. In addition, weekly meetings were held with the agents to answer questions and gather feedback on their experiences. Agents typically work four hours per day with dialog types changing every hour. Crowdsourced workers playing the user are accessed using Amazon Mechanical Turk. Payment for a completed dialog session lasting roughly five to seven minutes was typically in the range of $\$1.00$ to $\$1.30$. Problematic users are detected either by the agent involved in the specific dialog or by post-session assessment and removed from future requests.
The Taskmaster Corpus ::: Self-dialogs (one-person written dataset)
While the two-person approach to data collection creates a realistic scenario for robust, spoken dialog data collection, this technique is time consuming, complex and expensive, requiring considerable technical implementation as well as administrative procedures to train and manage agents and crowdsourced workers. In order to extend the Taskmaster dataset at minimal cost, we use an alternative self-dialog approach in which crowdsourced workers write the full dialogs themselves (i.e. interpreting the roles of both user and assistant).
The Taskmaster Corpus ::: Self-dialogs (one-person written dataset) ::: Task scenarios and instructions
Targeting the same six tasks used for the two-person dialogs, we again engaged the Amazon Mechanical Turk worker pool to create self-dialogs, this time as a written exercise. In this case, users are asked to pretend they have a personal assistant who can help them take care of various tasks in real time. They are told to imagine a scenario in which they are speaking to their assistant on the phone while the assistant accesses the services for one of the given tasks. They then write down the entire conversation. Figure FIGREF34 shows a sample set of instructions.
The Taskmaster Corpus ::: Self-dialogs (one-person written dataset) ::: Pros and cons of self-dialogs
The self-dialog technique renders quality data and avoids some of the challenges seen with the two-person approach. To begin, since the same person is writing both sides of the conversation, we never see misunderstandings that lead to frustration as is sometimes experienced between interlocutors in the two-person approach. In addition, all the self-dialogs follow a reasonable path even when the user is constructing conversations that include understanding errors or other types of dialog glitches such as when a particular choice is not available. As it turns out, crowdsourced workers are quite effective at recreating various types of interactions, both error-free and those containing various forms of linguistic repair. The sample dialog in Figure FIGREF44 shows the result of a self-dialog exercise in which workers were told to write a conversation with various ticket availability issues that is ultimately unsuccessful.
Two more benefits of the self-dialog approach are its efficiency and cost effectiveness. We were able to gather thousands of dialogs in just days without transcription or trained agents, and spent roughly six times less per dialog. Despite these advantages, the self-dialog written technique cannot recreate the disfluencies and other more complex error patterns that occur in the two-person spoken dialogs which are important for model accuracy and coverage.
The Taskmaster Corpus ::: Annotation
We chose a highly simplified annotation approach for Taskmaster-1 as compared to traditional, detailed strategies which require robust agreement among workers and usually include dialog state and slot information, among other possible labels. Instead we focus solely on API arguments for each type of conversation, meaning just the variables required to execute the transaction. For example, in dialogs about setting up UBER rides, we label the “to" and “from" locations along with the car type (UberX, XL, Pool, etc). For movie tickets, we label the movie name, theater, time, number of tickets, and sometimes screening type (e.g. 3D vs. standard). A complete list of labels is included with the corpus release.
As discussed in Section SECREF33, to encourage diversity, at times we explicitly ask users to change their mind in the middle of the conversation, and the agents to tell the user that the requested item is not available. This results in conversations having multiple instances of the same argument type. To handle this ambiguity, in addition to the labels mentioned above, the convention of either “accept” or “reject" was added to all labels used to execute the transaction, depending on whether or not that transaction was successful.
In Figure FIGREF49, both the number of people and the time variables in the assistant utterance would have the “.accept" label indicating the transaction was completed successfully. If the utterance describing a transaction does not include the variables by name, the whole sentence is marked with the dialog type. For example, a statement such as The table has been booked for you would be labeled as reservation.accept.
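To illustrate the labeling convention, here is a small, hypothetical example of how an annotated utterance might be represented. The field names and the JSON-style layout are assumptions made for illustration and do not necessarily match the exact schema of the released corpus.

```python
# A hypothetical annotated utterance: spans are labeled with an argument type
# plus ".accept"/".reject" depending on whether the transaction succeeded.
annotated_utterance = {
    "speaker": "ASSISTANT",
    "text": "OK, I have booked a table for 4 people at 7 pm at Sotto Mare.",
    "segments": [
        {"span": "4 people",   "label": "reservation.num_guests.accept"},
        {"span": "7 pm",       "label": "reservation.time.accept"},
        {"span": "Sotto Mare", "label": "reservation.restaurant_name.accept"},
    ],
}

# If no variable is named explicitly, the whole sentence carries the dialog type.
sentence_level = {
    "speaker": "ASSISTANT",
    "text": "The table has been booked for you.",
    "segments": [{"span": "The table has been booked for you.",
                  "label": "reservation.accept"}],
}
```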
Dataset Analysis ::: Self-dialogs vs MultiWOZ
We quantitatively compare our self-dialogs (Section SECREF45) with the MultiWOZ dataset in Table TABREF2. Compared to MultiWOZ, we do not ask the users and assistants to stick to detailed scripts and do not restrict them to conversations surrounding a small knowledge base. Table TABREF2 shows that our dataset has more unique words and almost twice the number of utterances per dialog compared to the MultiWOZ corpus. When trained with the Transformer BIBREF21 model, we observe significantly higher perplexities and lower BLEU scores for our dataset than for MultiWOZ, suggesting that our dataset's conversations are more difficult to model. Finally, Table TABREF2 also shows that our dataset contains close to 10 times more real-world named entities than MultiWOZ and thus could potentially serve as a realistic baseline when designing goal-oriented dialog systems. MultiWOZ has only 1,338 unique named entities and only 4,510 unique values (including date, time, etc.) in its dataset.
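As a rough illustration of how such corpus statistics can be computed, the sketch below counts unique words and average utterances per dialog over a toy list of dialogs; it is not the exact tooling used to produce Table TABREF2.

```python
from collections import Counter

def corpus_stats(dialogs):
    """dialogs: list of dialogs, each a list of utterance strings."""
    vocab = Counter()
    total_utts = 0
    for dialog in dialogs:
        total_utts += len(dialog)
        for utt in dialog:
            vocab.update(utt.lower().split())
    return {
        "num_dialogs": len(dialogs),
        "unique_words": len(vocab),
        "avg_utterances_per_dialog": total_utts / max(len(dialogs), 1),
    }

toy = [["hi , i want to order a pizza", "sure , what size ?", "large please"],
       ["book a table for two at 7 pm", "done , see you then"]]
print(corpus_stats(toy))
```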
Dataset Analysis ::: Self-dialogs vs Two-person
In this section, we quantitatively compare 5k conversations each of self-dialogs (Section SECREF45) and two-person dialogs (Section SECREF31). From Table TABREF50, we find that self-dialogs exhibit almost 3 times higher perplexity than the two-person conversations, suggesting that self-dialogs are more diverse and contain more non-conventional conversational flows, which is in line with the observations in Section SECREF47. While the number of unique words is higher for self-dialogs, the two-person conversations are longer. We also report metrics obtained by training a single model on both datasets together.
Dataset Analysis ::: Baseline Experiments: Response Generation
We evaluate various seq2seq architectures BIBREF22 on our self-dialog corpus using both automatic evaluation metrics and human judgments. Following the recent line of work on generative dialog systems BIBREF23, we treat the problem of response generation given the dialog history as a conditional language modeling problem. Specifically, we want to learn a conditional probability distribution $P_{\theta }(U_{t}|U_{1:t-1})$ where $U_{t}$ is the next response given the dialog history $U_{1:t-1}$. Each utterance $U_i$ is itself comprised of a sequence of words $w_{i_1}, w_{i_2} \ldots w_{i_k}$. The overall conditional probability is factorized autoregressively over the words of the response as $P_{\theta }(U_{t}|U_{1:t-1}) = \prod _{k} P_{\theta }(w_{t_k} \mid w_{t_1}, \ldots , w_{t_{k-1}}, U_{1:t-1})$.
$P_{\theta }$, in this work, is parameterized by a recurrent, convolutional, or Transformer-based seq2seq model.
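The factorization above translates directly into a token-level negative log-likelihood, which is also what the reported perplexity numbers are derived from. The following is a minimal PyTorch sketch of scoring one response given logits that are assumed to be already conditioned on the dialog history; the function signature is an illustrative assumption, not part of any specific framework used here.

```python
import torch
import torch.nn.functional as F

def response_nll(logits, target_ids, pad_id=0):
    """Negative log-likelihood of a response under the autoregressive factorization.

    logits:     (resp_len, vocab_size) scores for each target position, assumed to be
                conditioned on the dialog history U_{1:t-1} and previous target words.
    target_ids: (resp_len,) gold token ids of the response U_t.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    token_ll = log_probs[torch.arange(target_ids.size(0)), target_ids]
    mask = (target_ids != pad_id).float()
    nll = -(token_ll * mask).sum()
    perplexity = torch.exp(nll / mask.sum())
    return nll, perplexity

# Toy check with random scores over a 100-word vocabulary and a 6-token response.
logits = torch.randn(6, 100)
target = torch.tensor([5, 17, 3, 99, 42, 7])
print(response_nll(logits, target))
```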
n-gram: We consider 3-gram and 4-gram conditional language model baselines with interpolation, using a random grid search to find the best coefficients for the interpolated model (a minimal sketch of this interpolation scheme appears after this list of baselines).
Convolution: We use the fconv architecture BIBREF24 and default hyperparameters from the fairseq BIBREF25 framework. We train the network with ADAM optimizer BIBREF26 with learning rate of 0.25 and dropout probability set to 0.2.
LSTM: We consider LSTM models BIBREF27 with and without attention BIBREF28 and use the tensor2tensor BIBREF29 framework for the LSTM baselines. We use a two-layer LSTM network for both the encoder and the decoder with 128 dimensional hidden vectors.
Transformer: As with LSTMs, we use the tensor2tensor framework for the Transformer model. Our Transformer BIBREF21 model uses 256 dimensions for both input embedding and hidden state, 2 layers and 4 attention heads. For both LSTMs and Transformer, we train the model with ADAM optimizer ($\beta _{1} = 0.85$, $\beta _{2} = 0.997$) and dropout probability set to 0.2.
GPT-2: Apart from supervised seq2seq models, we also include results from pre-trained GPT-2 BIBREF30 containing 117M parameters.
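As referenced in the n-gram baseline description above, here is a minimal sketch of an interpolated n-gram language model. The fixed interpolation weights and the trigram-only setup are simplifying assumptions; the actual baseline also includes a 4-gram model and tunes the coefficients by random grid search.

```python
from collections import Counter

class InterpolatedNgramLM:
    """Tiny interpolated language model: P(w | h) = l3*P3 + l2*P2 + l1*P1."""

    def __init__(self, lambdas=(0.5, 0.3, 0.2)):
        self.l3, self.l2, self.l1 = lambdas
        self.uni, self.bi, self.tri = Counter(), Counter(), Counter()
        self.bi_ctx, self.tri_ctx = Counter(), Counter()

    def train(self, sentences):
        for sent in sentences:
            toks = ["<s>", "<s>"] + sent.split() + ["</s>"]
            for i in range(2, len(toks)):
                w, c1, c2 = toks[i], toks[i - 1], toks[i - 2]
                self.uni[w] += 1
                self.bi[(c1, w)] += 1
                self.bi_ctx[c1] += 1
                self.tri[(c2, c1, w)] += 1
                self.tri_ctx[(c2, c1)] += 1

    def prob(self, w, c1, c2):
        total = sum(self.uni.values())
        p1 = self.uni[w] / total if total else 0.0
        p2 = self.bi[(c1, w)] / self.bi_ctx[c1] if self.bi_ctx[c1] else 0.0
        p3 = self.tri[(c2, c1, w)] / self.tri_ctx[(c2, c1)] if self.tri_ctx[(c2, c1)] else 0.0
        return self.l3 * p3 + self.l2 * p2 + self.l1 * p1

lm = InterpolatedNgramLM()
lm.train(["i want to order a pizza", "i want to book a table"])
print(lm.prob("to", c1="want", c2="i"))  # P(to | i want)
```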
We evaluate all the models with perplexity and BLEU scores (Table TABREF55). Additionally, we perform two kinds of human evaluation, ranking and rating (Likert scale), for the top-3 performing models: Convolution, LSTM-attention and Transformer. For the ranking task, we randomly show 500 partial dialogs and the generated responses of the top-3 models from the test set to three different crowdsourced workers and ask them to rank the responses based on their relevance to the dialog history. For the rating task, we show the model responses individually to three different crowdsourced workers and ask them to rate the responses on a 1-5 Likert scale based on their appropriateness to the dialog history. From Table TABREF56, we see that inter-annotator reliability scores (Krippendorff's alpha) are higher for the ranking task than for the rating task. From Table TABREF55, we see that the Transformer is the best-performing model on automatic evaluation metrics. It is interesting to note that there is a strong correlation between BLEU score and human ranking judgments.
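For readers who want a simple starting point for aggregating the two kinds of human judgments, the sketch below computes the mean rank per model for the ranking task and the mean score per model for the Likert rating task. Computing Krippendorff's alpha itself is more involved and is not shown here; the data layout is an illustrative assumption.

```python
from collections import defaultdict

def aggregate_rankings(rank_judgments):
    """rank_judgments: list of dicts mapping model name -> rank (1 = best),
    one dict per (dialog context, rater). Returns mean rank per model."""
    sums, counts = defaultdict(float), defaultdict(int)
    for judgment in rank_judgments:
        for model, rank in judgment.items():
            sums[model] += rank
            counts[model] += 1
    return {m: sums[m] / counts[m] for m in sums}

def aggregate_ratings(likert_scores):
    """likert_scores: dict mapping model name -> list of 1-5 ratings."""
    return {m: sum(v) / len(v) for m, v in likert_scores.items()}

ranks = [{"Transformer": 1, "LSTM-attention": 2, "Convolution": 3},
         {"Transformer": 2, "LSTM-attention": 1, "Convolution": 3}]
print(aggregate_rankings(ranks))
print(aggregate_ratings({"Transformer": [4, 5, 4], "Convolution": [3, 3, 4]}))
```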
Dataset Analysis ::: Baseline Experiments: Argument Prediction
Next, we discuss a set of baseline experiments for the task of argument prediction. API arguments are annotated as spans in the dialog (Section SECREF48). We formulate this problem as mapping text conversation to a sequence of output arguments. Apart from the seq2seq Transformer baseline, we consider an additional model - an enhanced Transformer seq2seq model where the decoder can choose to copy from the input or generate from the vocabulary BIBREF31, BIBREF32. Since all the API arguments are input spans, the copy model having the correct inductive bias achieves the best performance.
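To make the copy-enhanced decoder concrete, here is a hedged sketch of the pointer-generator style mixture such models typically use at each decoding step: a learned gate blends the vocabulary distribution with the attention distribution scattered onto the source token ids. The shapes and the gating are illustrative and only loosely follow the cited formulations.

```python
import torch

def copy_mixture(p_vocab, attn_weights, src_token_ids, p_gen, vocab_size):
    """Blend generation and copy distributions for one decoding step.

    p_vocab:       (vocab_size,) softmax over the output vocabulary.
    attn_weights:  (src_len,) attention over source (dialog) tokens.
    src_token_ids: (src_len,) vocabulary ids of the source tokens.
    p_gen:         scalar in [0, 1], probability of generating vs. copying.
    """
    p_copy = torch.zeros(vocab_size)
    p_copy.index_add_(0, src_token_ids, attn_weights)  # scatter attention mass onto token ids
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy

# Toy step: 10-word vocabulary, 4 source tokens.
p_vocab = torch.softmax(torch.randn(10), dim=0)
attn = torch.softmax(torch.randn(4), dim=0)
src_ids = torch.tensor([2, 5, 5, 7])
p_final = copy_mixture(p_vocab, attn, src_ids, p_gen=torch.tensor(0.6), vocab_size=10)
print(p_final.sum())  # ~1.0
```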
Conclusion
To address the lack of quality corpora for data-driven dialog system research and development, this paper introduces Taskmaster-1, a dataset that provides richer and more diverse language as compared to current benchmarks since it is based on unrestricted, task-oriented conversations involving more real-word entities. In addition, we present two data collection methodologies, both spoken and written, that ensure both speaker diversity and conversational accuracy. Our straightforward, API-oriented annotation technique is much easier for annotators to learn and simpler to apply. We give several baseline models including state-of-the-art neural seq2seq architectures, provide qualitative human performance evaluations for these models, and find that automatic evaluation metrics correlate well with human judgments. | The average number of utterances per dialog is about 23 |
a276d5931b989e0a33f2a0bc581456cca25658d9 | a276d5931b989e0a33f2a0bc581456cca25658d9_0 | Q: What baseline models are offered?
| 3-gram and 4-gram conditional language model, Convolution, LSTM models BIBREF27 with and without attention BIBREF28, Transformer, GPT-2
c21d26130b521c9596a1edd7b9ef3fe80a499f1e | c21d26130b521c9596a1edd7b9ef3fe80a499f1e_0 | Q: Which six domains are covered in the dataset?
Text: Introduction
Voice-based “personal assistants” such as Apple's SIRI, Microsoft's Cortana, Amazon Alexa, and the Google Assistant have finally entered the mainstream. This development is generally attributed to major breakthroughs in speech recognition and text-to-speech (TTS) technologies aided by recent progress in deep learning BIBREF0, exponential gains in compute power BIBREF1, BIBREF2, and the ubiquity of powerful mobile devices. The accuracy of machine-learned speech recognizers BIBREF3 and speech synthesizers BIBREF4 is good enough for deployment in real-world products, and this progress has been driven by publicly available labeled datasets. However, conspicuously absent from this list is equal progress in machine-learned conversational natural language understanding (NLU) and generation (NLG). The NLU and NLG components of dialog systems, from the early research work BIBREF5 to the present commercially available personal assistants, largely rely on rule-based systems. The NLU and NLG systems are often carefully programmed for very narrow and specific cases BIBREF6, BIBREF7. General understanding of natural spoken behaviors across multiple dialog turns, even in single task-oriented situations, is by most accounts still a long way off. In this way, most of these products are very much hand-crafted, with inherent constraints on what users can say, how the system responds, and the order in which the various subtasks can be completed. They are high precision but relatively low coverage. Not only are such systems unscalable, but they lack the flexibility to engage in truly natural conversation.
Yet none of this is surprising. Natural language is heavily context dependent and often ambiguous, especially in multi-turn conversations across multiple topics. It is full of subtle discourse cues and pragmatic signals whose patterns have yet to be thoroughly understood. Enabling an automated system to hold a coherent task-based conversation with a human remains one of computer science's most complex and intriguing unsolved problems BIBREF5. In contrast to more traditional NLP efforts, interest in statistical approaches to dialog understanding and generation aided by machine learning has grown considerably in the last couple of years BIBREF8, BIBREF9, BIBREF10. However, the dearth of high quality, goal-oriented dialog data is considered a major hindrance to more significant progress in this area BIBREF9, BIBREF11.
To help solve the data problem we present Taskmaster-1, a dataset consisting of 13,215 dialogs, including 5,507 spoken and 7,708 written dialogs created with two distinct procedures. Each conversation falls into one of six domains: ordering pizza, creating auto repair appointments, setting up ride service, ordering movie tickets, ordering coffee drinks and making restaurant reservations. For the spoken dialogs, we created a “Wizard of Oz” (WOz) system BIBREF12 to collect two-person, spoken conversations. Crowdsourced workers playing the “user" interacted with human operators playing the “digital assistant” using a web-based interface. In this way, users were led to believe they were interacting with an automated system while it was in fact a human, allowing them to express their turns in natural ways but in the context of an automated interface. We refer to this spoken dialog type as “two-person dialogs". For the written dialogs, we engaged crowdsourced workers to write the full conversation themselves based on scenarios outlined for each task, thereby playing roles of both the user and assistant. We refer to this written dialog type as “self-dialogs". In a departure from traditional annotation techniques BIBREF10, BIBREF8, BIBREF13, dialogs are labeled with simple API calls and arguments. This technique is much easier for annotators to learn and simpler to apply. As such it is more cost effective and, in addition, the same model can be used for multiple service providers.
Taskmaster-1 has richer and more diverse language than the current popular benchmark in task-oriented dialog, MultiWOZ BIBREF13. Table TABREF2 shows that Taskmaster-1 has more unique words and is more difficult for language models to fit. We also find that Taskmaster-1 is more realistic than MultiWOZ. Specifically, the two-person dialogs in Taskmaster-1 involve more real-world entities than seen in MultiWOZ since we do not restrict conversations to a small knowledge base. Beyond the corpus and the methodologies used to create it, we present several baseline models including state-of-the-art neural seq2seq architectures together with perplexity and BLEU scores. We also provide qualitative human performance evaluations for these models and find that automatic evaluation metrics correlate well with human judgments. We will publicly release our corpus containing conversations, API call and argument annotations, and also the human judgments.
Related work ::: Human-machine vs. human-human dialog
BIBREF14 discuss the major features and differences among the existing offerings in an exhaustive and detailed survey of available corpora for data driven learning of dialog systems. One important distinction covered is that of human-human vs. human-machine dialog data, each having its advantages and disadvantages. Many of the existing task-based datasets have been generated from deployed dialog systems such as the Let’s Go Bus Information System BIBREF15 and the various Dialog State Tracking Challenges (DSTCs) BIBREF16. However, it is doubtful that new data-driven systems built with this type of corpus would show much improvement since they would be biased by the existing system and likely mimic its limitations BIBREF17. Since the ultimate goal is to be able to handle complex human language behaviors, it would seem that human-human conversational data is the better choice for spoken dialog system development BIBREF13. However, learning from purely human-human based corpora presents challenges of its own. In particular, human conversation has a different distribution of understanding errors and exhibits turn-taking idiosyncrasies which may not be well suited for interaction with a dialog system BIBREF17, BIBREF14.
Related work ::: The Wizard of Oz (WOz) Approach and MultiWOZ
The WOz framework, first introduced by BIBREF12 as a methodology for iterative design of natural language interfaces, presents a more effective approach to human-human dialog collection. In this setup, users are led to believe they are interacting with an automated assistant but in fact it is a human behind the scenes that controls the system responses. Given the human-level natural language understanding, users quickly realize they can comfortably and naturally express their intent rather than having to modify behaviors as is normally the case with a fully automated assistant. At the same time, the machine-oriented context of the interaction, i.e. the use of TTS and slower turn taking cadence, prevents the conversation from becoming fully fledged, overly complex human discourse. This creates an idealized spoken environment, revealing how users would openly and candidly express themselves with an automated assistant that provided superior natural language understanding.
Perhaps the most relevant work to consider here is the recently released MultiWOZ dataset BIBREF13, since it is similar in size, content and collection methodologies. MultiWOZ has roughly 10,000 dialogs which feature several domains and topics. The dialogs are annotated with both dialog states and dialog acts. MultiWOZ is an entirely written corpus and uses crowdsourced workers for both assistant and user roles. In contrast, Taskmaster-1 has roughly 13,000 dialogs spanning six domains and annotated with API arguments. The two-person spoken dialogs in Taskmaster-1 use crowdsourcing for the user role but trained agents for the assistant role. The assistant's speech is played to the user via TTS. The remaining 7,708 conversations in Taskmaster-1 are self-dialogs, in which crowdsourced workers write the entire conversation themselves. As BIBREF18, BIBREF19 show, self dialogs are surprisingly rich in content.
The Taskmaster Corpus ::: Overview
There are several key attributes that make Taskmaster-1 both unique and effective for data-driven approaches to building dialog systems and for other research.
Spoken and written dialogs: While the spoken sources more closely reflect conversational language BIBREF20, written dialogs are significantly cheaper and easier to gather. This allows for a significant increase in the size of the corpus and in speaker diversity.
Goal-oriented dialogs: All dialogs are based on one of six tasks: ordering pizza, creating auto repair appointments, setting up rides for hire, ordering movie tickets, ordering coffee drinks and making restaurant reservations.
Two collection methods: The two-person dialogs and self-dialogs each have pros and cons, revealing interesting contrasts.
Multiple turns: The average number of utterances per dialog is about 23 which ensures context-rich language behaviors.
API-based annotation: The dataset uses a simple annotation schema providing sufficient grounding for the data while making it easy for workers to apply labels consistently.
Size: The total of 13,215 dialogs in this corpus is on par with similar, recently released datasets such as MultiWOZ BIBREF13.
The Taskmaster Corpus ::: Two-person, spoken dataset
In order to replicate a two-participant, automated digital assistant experience, we built a WOz platform that pairs agents playing the digital assistant with crowdsourced workers playing the user in task-based conversational scenarios. An example dialog from this dataset is given in Figure FIGREF5.
The Taskmaster Corpus ::: Two-person, spoken dataset ::: WOz platform and data pipeline
While it is beyond the scope of this work to describe the entire system in detail, there are several platform features that help illustrate how the process works.
Modality: The agents playing the assistant type their input which is in turn played to the user via text-to-speech (TTS) while the crowdsourced workers playing the user speak aloud to the assistant using their laptop and microphone. We use WebRTC to establish the audio channel. This setup creates a digital assistant-like communication style.
Conversation and user quality control: Once the task is completed, the agents tag each conversation as either successful or problematic depending on whether the session had technical glitches or user behavioral issues. We are also then able to root out problematic users based on this logging.
Agent quality control: Agents are required to login to the system which allows us to monitor performance including the number and length of each session as well as their averages.
User queuing: When there are more users trying to connect to the system than available agents, a queuing mechanism indicates their place in line and connects them automatically once they move to the front of the queue.
Transcription: Once complete, the user's audio-only portion of the dialog is transcribed by a second set of workers and then merged with the assistant's typed input to create a full text version of the dialog. Finally, these conversations are checked for transcription errors and typos and then annotated, as described in Section SECREF48.
The Taskmaster Corpus ::: Two-person, spoken dataset ::: Agents, workers and training
Both agents and crowdsourced workers are given written instructions prior to the session. Examples of each are given in Figure FIGREF6 and Figure FIGREF23. The instructions continue to be displayed on screen to the crowdsourced workers while they interact with the assistant. Instructions are modified at times (for either participant or both) to ensure broader coverage of dialog scenarios that are likely to occur in actual user-assistant interactions. For example, in one case users were asked to change their mind after ordering their first item and in another agents were instructed to tell users that a given item was not available. Finally, in their instructions, crowdsourced workers playing the user are told they will be engaging in conversation with “a digital assistant”. However, it is plausible that some suspect human intervention due to the advanced level of natural language understanding from the assistant side.
Agents playing the assistant role were hired from a pool of dialog analysts and given two hours of training on the system interface as well as on how to handle specific scenarios such as uncooperative users and technical glitches. Uncooperative users typically involve those who either ignored agent input or who rushed through the conversation with short phrases. Technical issues involved dropped sessions (e.g. WebRTC connections failed) or cases in which the user could not hear the agent or vice-versa. In addition, weekly meetings were held with the agents to answer questions and gather feedback on their experiences. Agents typically work four hours per day with dialog types changing every hour. Crowdsourced workers playing the user are accessed using Amazon Mechanical Turk. Payment for a completed dialog session lasting roughly five to seven minutes was typically in the range of $\$1.00$ to $\$1.30$. Problematic users are detected either by the agent involved in the specific dialog or by post-session assessment and removed from future requests.
The Taskmaster Corpus ::: Self-dialogs (one-person written dataset)
While the two-person approach to data collection creates a realistic scenario for robust, spoken dialog data collection, this technique is time consuming, complex and expensive, requiring considerable technical implementation as well as administrative procedures to train and manage agents and crowdsourced workers. In order to extend the Taskmaster dataset at minimal cost, we use an alternative self-dialog approach in which crowdsourced workers write the full dialogs themselves (i.e. interpreting the roles of both user and assistant).
The Taskmaster Corpus ::: Self-dialogs (one-person written dataset) ::: Task scenarios and instructions
Targeting the same six tasks used for the two-person dialogs, we again engaged the Amazon Mechanical Turk worker pool to create self-dialogs, this time as a written exercise. In this case, users are asked to pretend they have a personal assistant who can help them take care of various tasks in real time. They are told to imagine a scenario in which they are speaking to their assistant on the phone while the assistant accesses the services for one of the given tasks. They then write down the entire conversation. Figure FIGREF34 shows a sample set of instructions.
The Taskmaster Corpus ::: Self-dialogs (one-person written dataset) ::: Pros and cons of self-dialogs
The self-dialog technique renders quality data and avoids some of the challenges seen with the two-person approach. To begin, since the same person is writing both sides of the conversation, we never see misunderstandings that lead to frustration as is sometimes experienced between interlocutors in the two-person approach. In addition, all the self-dialogs follow a reasonable path even when the user is constructing conversations that include understanding errors or other types of dialog glitches such as when a particular choice is not available. As it turns out, crowdsourced workers are quite effective at recreating various types of interactions, both error-free and those containing various forms of linguistic repair. The sample dialog in Figure FIGREF44 shows the result of a self-dialog exercise in which workers were told to write a conversation with various ticket availability issues that is ultimately unsuccessful.
Two more benefits of the self-dialog approach are its efficiency and cost effectiveness. We were able to gather thousands of dialogs in just days without transcription or trained agents, and spent roughly six times less per dialog. Despite these advantages, the self-dialog written technique cannot recreate the disfluencies and other more complex error patterns that occur in the two-person spoken dialogs which are important for model accuracy and coverage.
The Taskmaster Corpus ::: Annotation
We chose a highly simplified annotation approach for Taskmaster-1 as compared to traditional, detailed strategies which require robust agreement among workers and usually include dialog state and slot information, among other possible labels. Instead we focus solely on API arguments for each type of conversation, meaning just the variables required to execute the transaction. For example, in dialogs about setting up UBER rides, we label the “to” and “from” locations along with the car type (UberX, XL, Pool, etc.). For movie tickets, we label the movie name, theater, time, number of tickets, and sometimes screening type (e.g. 3D vs. standard). A complete list of labels is included with the corpus release.
As discussed in Section SECREF33, to encourage diversity, at times we explicitly ask users to change their mind in the middle of the conversation, and the agents to tell the user that the requested item is not available. This results in conversations having multiple instances of the same argument type. To handle this ambiguity, in addition to the labels mentioned above, the convention of either “accept” or “reject” was added to all labels used to execute the transaction, depending on whether or not that transaction was successful.
In Figure FIGREF49, both the number of people and the time variables in the assistant utterance would have the “.accept” label, indicating the transaction was completed successfully. If the utterance describing a transaction does not include the variables by name, the whole sentence is marked with the dialog type. For example, a statement such as The table has been booked for you would be labeled as reservation.accept.
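To make the API-argument scheme concrete, the snippet below shows one possible programmatic representation of a labeled assistant utterance; the utterance text, character offsets, label names and the accepted_arguments helper are illustrative assumptions rather than the released corpus format.

```python
# Hypothetical representation of API-argument labels for one assistant turn in
# a restaurant-reservation dialog; label names and offsets are illustrative.
utterance = {
    "speaker": "ASSISTANT",
    "text": "Your table for 4 people at 7:30 pm has been booked.",
    "segments": [
        {"start": 15, "end": 16, "text": "4",
         "label": "reservation.num_people.accept"},
        {"start": 27, "end": 34, "text": "7:30 pm",
         "label": "reservation.time.accept"},
    ],
}

def accepted_arguments(turn):
    """Collect the argument values needed to execute the transaction."""
    return {
        seg["label"].rsplit(".", 1)[0]: seg["text"]
        for seg in turn["segments"]
        if seg["label"].endswith(".accept")
    }

print(accepted_arguments(utterance))
# {'reservation.num_people': '4', 'reservation.time': '7:30 pm'}
```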
Dataset Analysis ::: Self-dialogs vs MultiWOZ
We quantitatively compare our self-dialogs (Section SECREF45) with the MultiWOZ dataset in Table TABREF2. Compared to MultiWOZ, we do not ask the users and assistants to stick to detailed scripts and do not restrict them to have conversations surrounding a small knowledge base. Table TABREF2 shows that our dataset has more unique words, and has almost twice the number of utterances per dialog as the MultiWOZ corpus. Moreover, when trained with the Transformer BIBREF21 model, we observe significantly higher perplexities and lower BLEU scores for our dataset compared to MultiWOZ, suggesting that our dataset conversations are difficult to model. Finally, Table TABREF2 also shows that our dataset contains close to 10 times more real-world named entities than MultiWOZ and thus could potentially serve as a realistic baseline when designing goal-oriented dialog systems. MultiWOZ has only 1338 unique named entities and only 4510 unique values (including date, time etc.) in its dataset.
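As a rough illustration of how such surface statistics are computed, the snippet below counts unique words and average utterances per dialog; the two toy dialogs and the flat list-of-strings structure are simplifications of the corpus' actual schema.

```python
from collections import Counter

# Toy dialogs: each dialog is a list of utterance strings (simplified format).
dialogs = [
    ["Hi, I want to order a pizza.", "Sure, what size would you like?"],
    ["Can you get me a ride to the airport?", "Of course, for what time?"],
]

vocab = Counter()
utterances_per_dialog = []
for dialog in dialogs:
    utterances_per_dialog.append(len(dialog))
    for utterance in dialog:
        vocab.update(utterance.lower().split())

print("unique words:", len(vocab))
print("avg utterances per dialog:", sum(utterances_per_dialog) / len(dialogs))
```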
Dataset Analysis ::: Self-dialogs vs Two-person
In this section, we quantitatively compare 5k conversations each of self-dialogs (Section SECREF45) and two-person dialogs (Section SECREF31). From Table TABREF50, we find that self-dialogs exhibit higher perplexity (almost 3 times) compared to the two-person conversations, suggesting that self-dialogs are more diverse and contain more non-conventional conversational flows, which is in line with the observations in Section SECREF47. While the number of unique words is higher in the case of self-dialogs, conversations are longer in the two-person conversations. We also report metrics by training a single model on both the datasets together.
Dataset Analysis ::: Baseline Experiments: Response Generation
We evaluate various seq2seq architectures BIBREF22 on our self-dialog corpus using both automatic evaluation metrics and human judgments. Following the recent line of work on generative dialog systems BIBREF23, we treat the problem of response generation given the dialog history as a conditional language modeling problem. Specifically, we want to learn a conditional probability distribution $P_{\theta }(U_{t}|U_{1:t-1})$ where $U_{t}$ is the next response given dialog history $U_{1:t-1}$. Each utterance $U_i$ itself is comprised of a sequence of words $w_{i_1}, w_{i_2} \ldots w_{i_k}$. The overall conditional probability is factorized autoregressively as
$P_{\theta }(U_{t}|U_{1:t-1}) = \prod _{j} P_{\theta }(w_{t_j} \mid w_{t_1}, \ldots , w_{t_{j-1}}, U_{1:t-1}).$
$P_{\theta }$, in this work, is parameterized by a recurrent, convolutional or Transformer-based seq2seq model.
n-gram: We consider 3-gram and 4-gram conditional language model baselines with interpolation. We use a random grid search to find the best coefficients for the interpolated model.
Convolution: We use the fconv architecture BIBREF24 and default hyperparameters from the fairseq BIBREF25 framework. We train the network with ADAM optimizer BIBREF26 with learning rate of 0.25 and dropout probability set to 0.2.
LSTM: We consider LSTM models BIBREF27 with and without attention BIBREF28 and use the tensor2tensor BIBREF29 framework for the LSTM baselines. We use a two-layer LSTM network for both the encoder and the decoder with 128 dimensional hidden vectors.
Transformer: As with LSTMs, we use the tensor2tensor framework for the Transformer model. Our Transformer BIBREF21 model uses 256 dimensions for both input embedding and hidden state, 2 layers and 4 attention heads. For both LSTMs and Transformer, we train the model with the ADAM optimizer ($\beta _{1} = 0.85$, $\beta _{2} = 0.997$) and dropout probability set to 0.2. A minimal configuration sketch in this spirit is shown after this list.
GPT-2: Apart from supervised seq2seq models, we also include results from pre-trained GPT-2 BIBREF30 containing 117M parameters.
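For concreteness, the following is a minimal sketch of a Transformer seq2seq response generator sized roughly like the baseline above (256-dimensional embeddings, 2 layers, 4 attention heads, dropout 0.2, Adam with $\beta _{1}=0.85$, $\beta _{2}=0.997$). It uses PyTorch's generic nn.Transformer rather than the tensor2tensor implementation used in the experiments, and the vocabulary size, padding id and random input tensors are placeholders.

```python
import torch
import torch.nn as nn

class ResponseGenerator(nn.Module):
    """Small Transformer seq2seq for next-response generation (sketch only;
    positional encodings are omitted for brevity)."""
    def __init__(self, vocab_size=16000, d_model=256, nhead=4,
                 num_layers=2, dropout=0.2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            dim_feedforward=4 * d_model, dropout=dropout, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, history_ids, response_ids):
        # history_ids: (batch, src_len); response_ids: (batch, tgt_len)
        tgt_mask = self.transformer.generate_square_subsequent_mask(
            response_ids.size(1))
        hidden = self.transformer(self.embed(history_ids),
                                  self.embed(response_ids), tgt_mask=tgt_mask)
        return self.out(hidden)  # (batch, tgt_len, vocab_size) logits

model = ResponseGenerator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.85, 0.997))
criterion = nn.CrossEntropyLoss(ignore_index=0)  # assume id 0 is padding

history = torch.randint(1, 16000, (2, 30))    # two encoded dialog histories
response = torch.randint(1, 16000, (2, 12))   # (shifted) target responses
logits = model(history, response)
loss = criterion(logits.reshape(-1, logits.size(-1)), response.reshape(-1))
```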
We evaluate all the models with perplexity and BLEU scores (Table TABREF55). Additionally, we perform two kinds of human evaluation - Ranking and Rating (Likert scale) - for the top-3 performing models: Convolution, LSTM-attention and Transformer. For the ranking task, we randomly show 500 partial dialogs and generated responses of the top-3 models from the test set to three different crowdsourced workers and ask them to rank the responses based on their relevance to the dialog history. For the rating task, we show the model responses individually to three different crowdsourced workers and ask them to rate the responses on a 1-5 Likert scale based on their appropriateness to the dialog history. From Table TABREF56, we see that inter-annotator reliability scores (Krippendorff's alpha) are higher for the ranking task compared to the rating task. From Table TABREF55, we see that Transformer is the best-performing model on automatic evaluation metrics. It is interesting to note that there is a strong correlation between BLEU score and human ranking judgments.
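The automatic metrics can be reproduced with standard tooling; the sketch below computes corpus-level BLEU with sacrebleu (one common implementation, not necessarily the one used in these experiments) and derives perplexity from per-token negative log-likelihoods, with toy hypotheses, references and NLL values standing in for real model outputs.

```python
import math
import sacrebleu

# Toy model outputs and single references for two test turns.
hypotheses = ["what time would you like the reservation",
              "sure, which theater do you prefer"]
references = [["what time would you like to reserve",
               "sure, which theater would you like"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print("BLEU:", bleu.score)

# Perplexity = exp of the average per-token negative log-likelihood.
token_nlls = [2.1, 3.4, 1.7, 2.9, 4.0]  # placeholder values from a model
print("perplexity:", math.exp(sum(token_nlls) / len(token_nlls)))
```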
Dataset Analysis ::: Baseline Experiments: Argument Prediction
Next, we discuss a set of baseline experiments for the task of argument prediction. API arguments are annotated as spans in the dialog (Section SECREF48). We formulate this problem as mapping the text of the conversation to a sequence of output arguments. Apart from the seq2seq Transformer baseline, we consider an additional model - an enhanced Transformer seq2seq model where the decoder can choose to copy from the input or generate from the vocabulary BIBREF31, BIBREF32. Since all the API arguments are spans of the input, the copy model, which has the appropriate inductive bias, achieves the best performance.
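One simple way to cast span-annotated API arguments as a seq2seq problem is to linearize the labeled spans into a target string, as sketched below; the dialog, labels and the "label = value" output format are hypothetical and only meant to illustrate the input/output mapping consumed by the (copy-augmented) Transformer.

```python
# Hypothetical conversion of an annotated dialog into a (source, target) pair
# for argument prediction; the linearized target format is illustrative only.
def to_seq2seq_example(dialog_turns, segments):
    source = " ".join(f"<{t['speaker'].lower()}> {t['text']}"
                      for t in dialog_turns)
    target = " ; ".join(f"{seg['label']} = {seg['text']}" for seg in segments)
    return source, target

turns = [
    {"speaker": "USER", "text": "Two tickets for Star Wars tonight please."},
    {"speaker": "ASSISTANT", "text": "Booked 2 tickets for Star Wars at 8 pm."},
]
segments = [
    {"label": "movie.name.accept", "text": "Star Wars"},
    {"label": "movie.num_tickets.accept", "text": "2"},
    {"label": "movie.time.accept", "text": "8 pm"},
]

source, target = to_seq2seq_example(turns, segments)
print(source)
print(target)  # movie.name.accept = Star Wars ; movie.num_tickets.accept = 2 ; ...
```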
Conclusion
To address the lack of quality corpora for data-driven dialog system research and development, this paper introduces Taskmaster-1, a dataset that provides richer and more diverse language as compared to current benchmarks since it is based on unrestricted, task-oriented conversations involving more real-world entities. In addition, we present two data collection methodologies, both spoken and written, that ensure both speaker diversity and conversational accuracy. Our straightforward, API-oriented annotation technique is much easier for annotators to learn and simpler to apply. We give several baseline models including state-of-the-art neural seq2seq architectures, provide qualitative human performance evaluations for these models, and find that automatic evaluation metrics correlate well with human judgments. | ordering pizza, creating auto repair appointments, setting up ride service, ordering movie tickets, ordering coffee drinks and making restaurant reservations |
ec8043290356fcb871c2f5d752a9fe93a94c2f71 | ec8043290356fcb871c2f5d752a9fe93a94c2f71_0 | Q: What other natural processing tasks authors think could be studied by using word embeddings?
Text: Introduction
The ability to construct complex and diverse linguistic structures is one of the main features that set us apart from all other species. Despite its ubiquity, some language aspects remain unknown. Topics such as language origin and evolution have been studied by researchers from diverse disciplines, including Linguistics, Computer Science, Physics and Mathematics BIBREF0, BIBREF1, BIBREF2. In order to better understand the underlying language mechanisms and universal linguistic properties, several models have been developed BIBREF3, BIBREF4. A particular language representation regards texts as complex systems BIBREF5. Written texts can be considered as complex networks (or graphs), where nodes could represent syllables, words, sentences, paragraphs or even larger chunks BIBREF5. In such models, network edges represent the proximity between nodes, e.g. the frequency of the co-occurrence of words. Several interesting results have been obtained from networked models, such as the explanation of Zipf's Law as a consequence of the least effort principle and theories on the nature of syntactical relationships BIBREF6, BIBREF7.
In a more practical scenario, text networks have been used in text classification tasks BIBREF8, BIBREF9, BIBREF10. The main advantage of the model is that it does not rely on deep semantic information to obtain competitive results. Another advantage of graph-based approaches is that, when combined with other approaches, they yield competitive results BIBREF11. A simple, yet recurrently used text model is the well-known word co-occurrence network. After optional textual pre-processing steps, in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window. A common strategy connects only adjacent words in the so-called word adjacency networks.
While the co-occurrence representation yields good results in classification scenarios, some important features are not considered in the model. For example, long-range syntactical links, though less frequent than adjacent syntactical relationships, might be disregarded from a simple word adjacency approach BIBREF12. In addition, semantically similar words not sharing the same lemma are mapped into distinct nodes. In order to address these issues, here we introduce a modification of the traditional network representation by establishing additional edges, referred to as “virtual” edges. In the proposed model, in addition to the co-occurrence edges, we link two nodes (words) if the corresponding word embedding representation is similar. While this approach still does not merge similar nodes into the same concept, similar nodes are explicitly linked via virtual edges.
Our main objective here is to evaluate whether such an approach is able to improve the discriminability of word co-occurrence networks in a typical text network classification task. We evaluate the methodology for different embedding techniques, including GloVe, Word2Vec and FastText. We also investigated different thresholding strategies to establish virtual links. Our results revealed, as a proof of principle, that the proposed approach is able to improve the discriminability of the classification when compared to the traditional co-occurrence network. While the gain in performance depended upon the text length being considered, we found relevant gains for intermediary text lengths. Additional results also revealed that a simple thresholding strategy combined with the use of stopwords tends to yield the best results.
We believe that the proposed representation could be applied in other text classification tasks, which could lead to potential gains in performance. Because the inclusion of virtual edges is a simple technique to make the network denser, such an approach can benefit networked representations with a limited number of nodes and edges. This representation could also shed light into language mechanisms in theoretical studies relying on the representation of text as complex networks. Potential novel research lines leveraging the adopted approach to improve the characterization of texts in other applications are presented in the conclusion.
Related works
Complex networks have been used in a wide range of fields, including Social Sciences BIBREF13, Neuroscience BIBREF14, Biology BIBREF15, Scientometry BIBREF16 and Pattern Recognition BIBREF17, BIBREF18, BIBREF19, BIBREF20. In text analysis, networks are used to uncover language patterns, including the origins of the ever-present Zipf's Law BIBREF21 and the analysis of linguistic properties of natural and unknown texts BIBREF22, BIBREF23. Applications of network science in text mining and text classification encompass semantic analysis BIBREF24, BIBREF25, BIBREF26, BIBREF27, authorship attribution BIBREF28, BIBREF29 and stylometry BIBREF28, BIBREF30, BIBREF31. Here we focus on the stylometric analysis of texts using complex networks.
In BIBREF28, the authors used a co-occurrence network to study a corpus of English and Polish books. They considered a dataset of 48 novels, which were written by 8 different authors. Differently from traditional co-occurrence networks, some punctuation marks were considered as words when mapping texts as networks. The authors also decided to create a methodology to normalize the obtained network metrics, since they considered documents with variations in length. A related approach was adopted in another study BIBREF32, with a focus on comparing novel measurements and measuring the effect of considering stopwords in the network structure.
A different approach to analyze co-occurrence networks was devised in BIBREF33. Whilst most approaches only considered traditional network measurements or devised novel topological and dynamical measurements, the authors combined networked and semantic information to improve the performance of network-based classification. Interestingly, the combined use of network motifs and node labels (representing the corresponding words) allowed an improvement in performance in the considered task. A similar combination of techniques using a hybrid approach was proposed in BIBREF8. Network-based approaches have also been applied to authorship recognition tasks in other languages, including Persian texts BIBREF9.
Co-occurrence networks have been used in contexts other than stylometric analysis. The main advantage of this approach is illustrated in the task aimed at diagnosing diseases via text analysis BIBREF11. Because the topological analysis of co-occurrence language networks does not require deep semantic analysis, this model is able to represent text created by patients suffering from cognitive impairment BIBREF11. Recently, it has been shown that the combination of network and traditional features could be used to improve the diagnosis of patients with cognitive impairment BIBREF11. Interestingly, this was one of the first approaches suggesting the use of embeddings to address the particular problem of the lack of statistics to create a co-occurrence network in short documents BIBREF34.
While many of the works dealing with word co-occurrence networks have been proposed in the last few years, no systematic study of the effects of including information from word embeddings in such networks has been conducted. This work studies how links created via embedding information modify the underlying structure of networks and, most importantly, how they can improve the model to provide better classification performance in the stylometry task.
Material and Methods
To represent texts as networks, we used the so-called word adjacency network representation BIBREF35, BIBREF28, BIBREF32. Typically, before creating the networks, the text is pre-processed. An optional pre-processing step is the removal of stopwords. This step is optional because such words include mostly articles and prepositions, which may be naturally represented by network edges. However, in some applications – including the authorship attribution task – stopwords (or function words) play an important role in the stylistic characterization of texts BIBREF32. A list of stopwords considered in this study is available in the Supplementary Information.
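A minimal sketch of this construction step is given below: consecutive words are linked in a networkx graph, with an optional stopword-removal switch; the tiny stopword set and example sentence are illustrative only.

```python
import networkx as nx

STOPWORDS = {"the", "a", "of", "and", "to"}  # tiny illustrative list

def cooccurrence_network(tokens, remove_stopwords=True):
    """Build a word adjacency network by linking consecutive words."""
    if remove_stopwords:
        tokens = [t for t in tokens if t not in STOPWORDS]
    graph = nx.Graph()
    for w1, w2 in zip(tokens, tokens[1:]):
        if w1 != w2:
            graph.add_edge(w1, w2, kind="cooccurrence")
    return graph

text = "the car stopped and the vehicle behind the car stopped too"
net = cooccurrence_network(text.split())
print(net.number_of_nodes(), net.number_of_edges())
```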
The pre-processing step may also include a lemmatization procedure. This step aims at mapping words conveying the same meaning into the same node. In the lemmatization process, nouns and verbs are mapped into their singular and infinitive forms. Note that, while this step is useful to merge words sharing a lemma into the same node, more complex semantic relationships are overlooked. For example, if “car” and “vehicle” co-occur in the same text, they are considered as distinct nodes, which may result in an inaccurate representation of the text.
Such a drawback is addressed by including “virtual” edges connecting nodes. In other words, even if two words are not adjacent in the text, we include “virtual” edges to indicate that two distant words are semantically related. The inclusion of such virtual edges is illustrated in Figure FIGREF1. In order to measure the semantic similarity between two concepts, we use the concept of word embeddings BIBREF36, BIBREF37. Thus, each word is represented using a vector representation encoding the semantic and contextual characteristics of the word. Several interesting properties have been obtained from distributed representations of words. One particular property encoded in the embeddings representation is the fact that the semantic similarity between concepts is proportional to the similarity of the vectors representing the words. Similarly to several other works, here we measure the similarity of the vectors via cosine similarity BIBREF38.
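The sketch below illustrates how such virtual edges could be added on top of an existing co-occurrence network, assuming word vectors are available as a dictionary of numpy arrays; the two-dimensional toy vectors and the 0.7 similarity cut-off are placeholders rather than values used in the experiments.

```python
import numpy as np
import networkx as nx

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def add_virtual_edges(graph, vectors, threshold=0.7):
    """Link semantically similar words with 'virtual' edges.

    `vectors` maps each word to its embedding; the fixed cut-off illustrates
    the global thresholding strategy discussed below.
    """
    nodes = [n for n in graph.nodes if n in vectors]
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if graph.has_edge(u, v):
                continue  # existing co-occurrence links are kept as they are
            similarity = cosine(vectors[u], vectors[v])
            if similarity >= threshold:
                graph.add_edge(u, v, kind="virtual", weight=similarity)
    return graph

# Tiny demo with made-up 2-d vectors; real vectors would come from GloVe,
# Word2Vec or FastText models.
g = nx.Graph([("car", "stopped"), ("vehicle", "behind")])
toy_vectors = {"car": np.array([1.0, 0.1]), "vehicle": np.array([0.9, 0.2]),
               "stopped": np.array([0.0, 1.0]), "behind": np.array([0.1, 0.8])}
add_virtual_edges(g, toy_vectors)
print(g.edges(data=True))
```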
The following strategies to create word embeddings were considered in this paper:
GloVe: the Global Vectors (GloVe) algorithm is an extension of the Word2vec model BIBREF39 for efficient word vector learning BIBREF40. This approach combines global statistics from matrix factorization techniques (such as latent semantic analysis) with context-based and predictive methods like Word2Vec. The method is called Global Vectors because the global corpus statistics are captured by GloVe. Instead of using a window to define the local context, GloVe constructs an explicit word-context matrix (or co-occurrence matrix) using statistics across the entire corpus. The final result is a learning model that oftentimes yields better word vector representations BIBREF40.
Word2Vec: this is a predictive model that finds dense vector representations of words using a three-layer neural network with a single hidden layer BIBREF39. It comes in two variants: the continuous bag-of-words and the skip-gram model. In the latter, the model analyzes the words of a set of sentences (or corpus) and attempts to predict the neighbors of such words. For example, taking as reference the word “Robin”, the model decides that “Hood” is more likely to follow the reference word than any other word. The vectors are obtained as follows: given the vocabulary (generated from all corpus words), the model trains a neural network with the sentences of the corpus. Then, for a given word, the probabilities that each word follows the reference word are obtained. Once the neural network is trained, the weights of the hidden layer are used as vectors of each corpus word.
FastText: this method is another extension of the Word2Vec model BIBREF41. Unlike Word2Vec, FastText represents each word as a bag of character n-grams. Therefore, the neural network not only trains individual words, but also several n-grams of such words. The vector for a word is the sum of vectors obtained for the character n-grams composing the word. For example, the embedding obtained for the word “computer” with $n\le 3$ is the sum of the embeddings obtained for “co”, “com”, “omp”, “mpu”, “put”, “ute”, “ter” and “er”. In this way, this method obtains improved representations for rare words, since n-grams composing rare words might be present in other words. The FastText representation also allows the model to understand suffixes and prefixes. Another advantage of FastText is its efficiency to be trained in very large corpora.
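In practice such vectors are trained or loaded with standard toolkits; the snippet below uses gensim (one possible choice, not necessarily the setup adopted by the authors) to fit Word2Vec and FastText on a toy corpus and query word similarities, while pretrained GloVe vectors would typically be loaded as keyed vectors instead. The corpus and hyperparameters are purely illustrative.

```python
from gensim.models import FastText, Word2Vec

# Toy corpus of tokenized sentences; in practice the models are trained on
# (or loaded from) a much larger corpus.
sentences = [["the", "car", "stopped"],
             ["the", "vehicle", "stopped", "near", "the", "car"]]

w2v = Word2Vec(sentences, vector_size=100, window=5, sg=1, min_count=1)
ft = FastText(sentences, vector_size=100, window=5, min_count=1)

print(w2v.wv.similarity("car", "vehicle"))
print(ft.wv.similarity("car", "vehicle"))
```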
Concerning the thresholding process, we considered two main strategies. First, we used a global strategy: in addition to the co-occurrence links (continuous lines in Figure FIGREF1), only “virtual” edges stronger than a given threshold are left in the network. Thus only the most similar concepts are connected via virtual links. This strategy is hereafter referred to as global strategy. Unfortunately, this method may introduce an undesired bias towards hubs BIBREF42.
To overcome the potential disadvantages of the global thresholding method, we also considered a more refined thresholding approach that takes into account the local structure to decide whether a weighted link is statistically significant BIBREF42. This method relies on the idea that the importance of an edge should be considered in the context in which it appears. In other words, the relevance of an edge should be evaluated by analyzing the nodes connected to its ending points. Using the concept of disparity filter, the method devised in BIBREF42 defines a null model that quantifies the probability of a node to be connected to an edge with a given weight, based on its other connections. This probability is used to define the significance of the edge. The parameter that is used to measure the significance of an edge $e_{ij}$ is $\alpha _{ij}$, defined as
$\alpha _{ij} = \left( 1 - \frac{w_{ij}}{\sum _k w_{ik}} \right)^{k_i - 1},$
where $w_{ij}$ is the weight of the edge $e_{ij}$ and $k_i$ is the degree of the $i$-th node. The obtained network corresponds to the set of nodes and edges obtained by removing all edges with $\alpha $ higher than the considered threshold. Note that while the similarity between co-occurrence links might be considered to compute $\alpha _{ij}$, only “virtual” edges (i.e. the dashed lines in Figure FIGREF1) are eligible to be removed from the network in the filtering step. This strategy is hereafter referred to as local strategy.
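A minimal implementation of this local filtering step, using the closed form of the disparity-filter significance given above and removing only virtual edges, could look as follows; the $\alpha $ threshold and the convention for single-link endpoints are illustrative choices.

```python
import networkx as nx

def disparity_filter(graph, alpha_max=0.05):
    """Remove virtual edges that are not statistically significant.

    Uses alpha_ij = (1 - w_ij / sum_k w_ik)^(k_i - 1); an edge is kept if it is
    significant (alpha <= alpha_max) from the viewpoint of at least one
    endpoint with more than one connection. Co-occurrence links are never
    eligible for removal, as described in the text above.
    """
    to_remove = []
    for u, v, data in graph.edges(data=True):
        if data.get("kind") != "virtual":
            continue
        alphas = []
        for node in (u, v):
            k = graph.degree(node)
            if k <= 1:
                continue  # a single-link endpoint gives no evidence either way
            strength = sum(d.get("weight", 1.0)
                           for _, _, d in graph.edges(node, data=True))
            p = data.get("weight", 1.0) / strength
            alphas.append((1.0 - p) ** (k - 1))
        if alphas and min(alphas) > alpha_max:
            to_remove.append((u, v))
    graph.remove_edges_from(to_remove)
    return graph

# Small demo: the weak virtual edge is dropped, the strong one is kept.
g = nx.Graph()
g.add_edge("car", "stopped", kind="cooccurrence", weight=1.0)
g.add_edge("car", "vehicle", kind="virtual", weight=0.9)
g.add_edge("car", "house", kind="virtual", weight=0.1)
disparity_filter(g, alpha_max=0.5)
print(list(g.edges(data="kind")))
```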
After co-occurrence networks are created and virtual edges are included, in the next step we used a characterization based on topological analysis. Because a global topological analysis is prone to variations in network size, we focused our analysis on the local characterization of complex networks. In a local topological analysis, we use as features the values of topological/dynamical measurements obtained for a set of words. In this case, we selected as reference the words occurring in all books of the dataset. For each word, we considered the following network measurements: degree, betweenness, clustering coefficient, average shortest path length, PageRank, concentric symmetry (at the second and third hierarchical level) BIBREF32 and accessibility BIBREF43, BIBREF44 (at the second and third hierarchical level). We chose these measurements because all of them capture some particular linguistic feature of texts BIBREF45, BIBREF46, BIBREF47, BIBREF48. After network measurements are extracted, they are used in machine learning algorithms. In our experiments, we considered Decision Trees (DT), nearest neighbors (kNN), Naive Bayes (NB) and Support Vector Machines (SVM). We used some heuristics to optimize classifier parameters. Such techniques are described in the literature BIBREF49. The accuracy of the pattern recognition methods was evaluated using cross-validation BIBREF50.
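The sketch below illustrates the local characterization and classification steps with networkx and scikit-learn, using only a subset of the listed measurements (degree, betweenness, clustering coefficient and PageRank) and toy documents in place of books; concentric symmetry and accessibility are omitted because they are not available off the shelf.

```python
import numpy as np
import networkx as nx
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def node_features(graph, words):
    """Degree, betweenness, clustering and PageRank for a fixed word list."""
    betweenness = nx.betweenness_centrality(graph)
    clustering = nx.clustering(graph)
    pagerank = nx.pagerank(graph)
    return [value for w in words
            for value in (graph.degree(w), betweenness[w],
                          clustering[w], pagerank[w])]

# Toy stand-ins: one small adjacency network per document, one author label each.
texts = ["the car stopped near the old house",
         "a vehicle stopped by the house door",
         "she walked along the river at dawn",
         "he walked across the bridge at night"]
authors = [0, 0, 1, 1]
networks = []
for text in texts:
    tokens = text.split()
    g = nx.Graph()
    g.add_edges_from(zip(tokens, tokens[1:]))
    networks.append(g)

# Features are computed only for words shared by all documents.
shared_words = sorted(set.intersection(*(set(g.nodes) for g in networks)))
X = np.array([node_features(g, shared_words) for g in networks])
scores = cross_val_score(SVC(), X, authors, cv=2)
print(scores)
```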
In summary, the methodology used in this paper encompasses the following steps:
Network construction: here texts are mapped into co-occurrence networks. Some variations exist in the literature; here we focused on the most usual one, i.e. the possibility of considering or disregarding stopwords. A network with co-occurrence links is obtained after this step.
Network enrichment: in this step, the network is enriched with virtual edges established via similarity of word embeddings. After this step, we are given a complete network with weighted links. Virtually any embedding technique could be used to gauge the similarity between nodes.
Network filtering: in order to eliminate spurious links included in the last step, the weakest edges are filtered. Two approaches were considered: a simple approach based on a global threshold and a local thresholding strategy that preserves network community structure. The outcome of this network filtering step is a network with two types of links: co-occurrence and virtual links (as shown in Figure FIGREF1).
Feature extraction: In this step, topological and dynamical network features are extracted. Here, we do not discriminate co-occurrence from virtual edges to compute the network metrics.
Pattern classification: once features are extracted from complex networks, they are used in pattern classification methods. This might include supervised, unsupervised and semi-supervised classification. This framework is exemplified in the supervised scenario.
The above framework is exemplified with the most common technique(s). It should be noted that the methods used, however, can be replaced by similar techniques. For example, the network construction could consider stopwords or even punctuation marks BIBREF51. Another possibility is the use of different strategies of thresholding. While a systematic analysis of techniques and parameters is still required to reveal other potential advantages of the framework based on the addition of virtual edges, in this paper we provide a first analysis showing that virtual edges could be useful to improve the discriminability of texts modeled as complex networks.
Here we used a dataset compatible with datasets used recently in the literature (see e.g. BIBREF28, BIBREF10, BIBREF52). The objective of the studied stylometric task is to identify the authorship of an unknown document BIBREF53. All data and some statistics of each book are shown in the Supplementary Information.
Results and Discussion
In Section SECREF13, we probe whether the inclusion of virtual edges is able to improve the performance of the traditional co-occurrence network-based classification in a usual stylometry task. While the focus of this paper is not to perform a systematic analysis of different methods comprising the adopted network, we consider two variations in the adopted methodology. In Section SECREF19, we consider the use of stopwords and the adoption of a local thresholding process to establish different criteria to create new virtual edges.
Results and Discussion ::: Performance analysis
In Figure FIGREF14, we show some of the improvements in performance obtained when including a fixed amount of virtual edges using GloVe as embedding method. In each subpanel, we show the relative improvement in performance obtained as a function of the fraction of additional edges. In this section, we considered the traditional co-occurrence as starting point. In other words, the network construction disregarded stopwords. The list of stopwords considered in this paper is available in the Supplementary Information. We also considered the global approach to filter edges.
The relative improvement in performance is given by $\Gamma _+{(p)}/\Gamma _0$, where $\Gamma _+{(p)}$ is the accuracy rate obtained when $p\%$ additional edges are included and $\Gamma _0 = \Gamma _+{(p=0)}$, i.e. $\Gamma _0$ is the accuracy rate measured from the traditional co-occurrence model. We only show the highest relative improvements in performance for each classifier. In our analysis, we considered also samples of text with distinct length, since the performance of network-based methods is sensitive to text length BIBREF34. In this figure, we considered samples comprising $w=\lbrace 1.0, 2.5, 5.0, 10.0\rbrace $ thousand words.
The results obtained for GloVe show that the highest relative improvements in performance occur for decision trees. This is especially apparent for the shortest samples. For $w=1,000$ words, the decision tree accuracy is enhanced by almost 50% when $p=20\%$. An excellent gain in performance is also observed for both Naive Bayes and SVM classifiers, when $p=18\%$ and $p=12\%$, respectively. When $w=2,500$ words, the highest improvement was observed for the decision tree algorithm. A minor improvement was observed for the kNN method. A similar behavior occurred for $w=5,000$ words. Interestingly, SVM seems to benefit from the use of additional edges when larger documents are considered. When only 5% virtual edges are included, the relative gain in performance is about 45%.
The relative gain in performance obtained for Word2vec is shown in Figure FIGREF15. Overall, once again decision trees obtained the highest gain in performance when short texts are considered. Similar to the analysis based on the GloVe method, the gain for kNN is low when compared to the benefit received by other methods. Here, a considerable gain for SVM is only clear for $w=2,500$ and $p=10\%$. When large texts are considered, Naive Bayes obtained the largest gain in performance.
Finally, the relative gain in performance obtained for FastText is shown in Figure FIGREF16. The prominent role of virtual edges in the decision tree algorithm for the classification of short texts is once again evident. Conversely, the classification of large documents using virtual edges mostly benefits the classification based on the Naive Bayes classifier. Similarly to the results observed for GloVe and Word2vec, the gain in performance obtained for kNN is low when compared to other methods.
While Figures FIGREF14 – FIGREF16 show the relative behavior of the accuracy, it is still interesting to observe the absolute accuracy rate obtained with the classifiers. In Table TABREF17, we show the best accuracy rate (i.e. $\max \Gamma _+ = \max _p \Gamma _+(p)$) for GloVe. We also show the average difference in performance ($\langle \Gamma _+ - \Gamma _0 \rangle $) and the total number of cases in which an improvement in performance was observed ($N_+$). $N_+$ ranges in the interval $0 \le N_+ \le 20$. Table TABREF17 summarizes the results obtained for $w = \lbrace 1.0, 5.0, 10.0\rbrace $ thousand words. Additional results for other text lengths are available in Tables TABREF28–TABREF30 of the Supplementary Information.
In very short texts, despite the low accuracy rates, an improvement can be observed in all classifiers. The best result was obtained with SVM when virtual edges were included. For $w=5,000$ words, the inclusion of new edges has no positive effect on either the kNN or the Naive Bayes algorithm. On the other hand, once again SVM could be improved, yielding an optimized performance. For $w=10,000$ words, SVM could not be improved. However, even without improvement it yielded the maximum accuracy rate. The Naive Bayes algorithm, on average, could be improved by a margin of about 10%.
The results obtained for Word2vec are summarized in Table TABREF29 of the Supplementary Information. Considering short documents ($w=1,000$ words), here the best results occur only with the decision tree method combined with enriched networks. Differently from the GloVe approach, SVM does not yield the best results. Nonetheless, the highest accuracy across all classifiers and values of $p$ is the same. For larger documents ($w=5,000$ and $w=10,000$ words), no significant difference in performance between Word2vec and GloVe is apparent.
The results obtained for FastText are shown in Table TABREF18. In short texts, only kNN and Naive Bayes have their performance improved with virtual edges. However, none of the optimized results for these classifiers outperformed SVM applied to the traditional co-occurrence model. Conversely, when $w=5,000$ words, the optimized results are obtained with virtual edges in the SVM classifier. Apart from kNN, the enriched networks improved the traditional approach in all classifiers. For large chunks of texts ($w=10,000$), once again the approach based on SVM and virtual edges yielded optimized results. All classifiers benefited from the inclusion of additional edges. Remarkably, Naive Bayes improved by a margin of about $13\%$.
Results and Discussion ::: Effects of considering stopwords and local thresholding
While in the previous section we focused our analysis on the traditional word co-occurrence model, here we probe whether the idea of considering virtual edges can also yield optimized results in particular modifications of the framework described in the methodology. The first modification in the co-occurrence model is the use of stopwords. While stopwords are disregarded in semantic applications of network language modeling, in other applications they can unravel interesting linguistic patterns BIBREF10. Here we analyzed the effect of using stopwords in enriched networks. We summarize the obtained results in Table TABREF20. We only show the results obtained with SVM, as it yielded the best results in comparison to other classifiers. The accuracy rate for other classifiers is shown in the Supplementary Information.
The results in Table TABREF20 reveal that even when stopwords are considered in the original model, an improvement can be observed with the addition of virtual edges. However, the results show that the degree of improvement depends upon the text length. In very short texts ($w=1,000$), none of the embedding strategies was able to improve the performance of the classification. For $w=1,500$, a minor improvement was observed with FastText: the accuracy increased from $\Gamma _0 = 37.18\%$ to $38.46\%$. A larger improvement could be observed for $w=2,000$. Both Word2vec and FastText approaches allowed an increase of more than 5% in performance. A gain higher than 10% was observed for $w=2,500$ with Word2vec. For larger pieces of text, the gain is smaller or absent. All in all, the results show that the use of virtual edges can also benefit the network approach based on stopwords. However, no significant improvement could be observed for very short and very large documents. The comparison of all three embedding methods showed that no method performed better than the others in all cases.
We also investigated whether more informed thresholding strategies could provide better results. While the simple global thresholding approach might not be able to represent more complex structures, we also tested a more robust approach based on the local strategy proposed by Serrano et al. BIBREF42. In Table TABREF21, we summarize the results obtained with this thresholding strategy. The table shows $\max \Gamma _+^{(L)} / \max \Gamma _+^{(G)}$, where $\Gamma _+^{(L)}$ and $\Gamma _+^{(G)}$ are the accuracy obtained with the local and global thresholding strategy, respectively. The results were obtained with the SVM classifier, as it turned out to be the most efficient classification method. We found that there is no gain in performance when the local strategy is used. In particular cases, the global strategy is considerably more efficient. This is the case e.g. when GloVe is employed in texts with $w=1,500$ words. The performance of the global strategy is $12.2\%$ higher than the one obtained with the local method. A minor difference in performance was found in texts comprising $w=1,000$ words, yet the global strategy is still more efficient than the local one.
To summarize all results obtained in this study we show in Table TABREF22 the best results obtained for each text length. We also show the relative gain in performance with the proposed approach and the embedding technique yielding the best result. All optimized results were obtained with the use of stopwords, global thresholding strategy and SVM as classification algorithm. A significant gain is more evident for intermediary text lengths.
Conclusion
Textual classification remains one of the most important facets of the Natural Language Processing area. Here we studied a family of classification methods, the word co-occurrence networks. Despite this apparent simplicity, this model has been useful in several practical and theoretical scenarios. We proposed a modification of the traditional model by establishing virtual edges to connect nodes that are semantically similar via word embeddings. The reasoning behind this strategy is the fact that similar words are not properly linked in the traditional model and, thus, important links might be overlooked if only adjacent words are linked.
Taking as reference task a stylometric problem, we showed – as a proof of principle – that the use of virtual edges might improve the discriminability of networks. When analyzing the best results for each text length, apart from very short and long texts, the proposed strategy yielded optimized results in all cases. The best classification performance was always obtained with the SVM classifier. In addition, we found an improved performance when stopwords are used in the construction of the enriched co-occurrence networks. Finally, a simple global thresholding strategy was found to be more efficient than a local approach that preserves the community structure of the networks. Because complex networks are usually combined with other strategies BIBREF8, BIBREF11, we believe that the proposed approach could be used in combination with other methods to improve the classification performance of other text classification tasks.
Our findings pave the way for research in several new directions. While we probed the effectiveness of virtual edges in a specific text classification task, we could extend this approach to general classification tasks. A systematic comparison of embedding techniques could also be performed to include other recent techniques BIBREF54, BIBREF55. We could also identify other relevant techniques to create virtual edges, thus allowing the use of the methodology in networked systems other than texts. For example, a network could be enriched with embeddings obtained from graph embedding techniques. A simpler approach could also consider link prediction BIBREF56 to create virtual edges. Finally, another interesting family of studies concerns the discrimination between co-occurrence and virtual edges, possibly by creating novel network measurements considering heterogeneous links.
Acknowledgments
The authors acknowledge financial support from FAPESP (Grant no. 16/19069-9), CNPq-Brazil (Grant no. 304026/2018-2). This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
Supplementary Information ::: Stopwords
The following words were considered as stopwords in our analysis: all, just, don't, being, over, both, through, yourselves, its, before, o, don, hadn, herself, ll, had, should, to, only, won, under, ours,has, should've, haven't, do, them, his, very, you've, they, not, during, now, him, nor, wasn't, d, did, didn, this, she, each, further, won't, where, mustn't, isn't, few, because, you'd, doing, some, hasn, hasn't, are, our, ourselves, out, what, for, needn't, below, re, does, shouldn't, above, between, mustn, t, be, we, who, mightn't, doesn't, were, here, shouldn, hers, aren't, by, on, about, couldn, of, wouldn't, against, s, isn, or, own, into, yourself, down, hadn't, mightn, couldn't, wasn, your, you're, from, her, their, aren, it's, there, been, whom, too, wouldn, themselves, weren, was, until, more, himself, that, didn't, but, that'll, with, than, those, he, me, myself, ma, weren't, these, up, will, while, ain, can, theirs, my, and, ve, then, is, am, it, doesn, an, as, itself, at, have, in, any, if, again, no, when, same, how, other, which, you, shan't, shan, needn, haven, after, most, such, why, a, off i, m, yours, you'll, so, y, she's, the, having, once.
Supplementary Information ::: List of books
The list of books is shown in Tables TABREF25 and TABREF26. For each book we show the respective authors (Aut.) and the following quantities: total number of words ($N_W$), total number of sentences ($N_S$), total number of paragraphs ($N_P$) and the average sentence length ($\langle S_L \rangle $), measured in number of words. The following authors were considered: Hector Hugh (HH), Thomas Hardy (TH), Daniel Defoe (DD), Allan Poe (AP), Bram Stoker (BS), Mark Twain (MT), Charles Dickens (CD), Pelham Grenville (PG), Charles Darwin (CD), Arthur Doyle (AD), George Eliot (GE), Jane Austen (JA), and Joseph Conrad (JC).
Supplementary Information ::: Additional results
In this section we show additional results obtained for different text lengths. More specifically, we show the results obtained for GloVe, Word2vec and FastText when stopwords are either considered in the text or disregarded from the analysis. | general classification tasks, use of the methodology in other networked systems, a network could be enriched with embeddings obtained from graph embeddings techniques |
728c2fb445173fe117154a2a5482079caa42fe24 | 728c2fb445173fe117154a2a5482079caa42fe24_0 | Q: What is the reason that traditional co-occurrence networks fail in establishing links between similar words whenever they appear distant in the text?
Text: Introduction
The ability to construct complex and diverse linguistic structures is one of the main features that set us apart from all other species. Despite its ubiquity, some language aspects remain unknown. Topics such as language origin and evolution have been studied by researchers from diverse disciplines, including Linguistic, Computer Science, Physics and Mathematics BIBREF0, BIBREF1, BIBREF2. In order to better understand the underlying language mechanisms and universal linguistic properties, several models have been developed BIBREF3, BIBREF4. A particular language representation regards texts as complex systems BIBREF5. Written texts can be considered as complex networks (or graphs), where nodes could represent syllables, words, sentences, paragraphs or even larger chunks BIBREF5. In such models, network edges represent the proximity between nodes, e.g. the frequency of the co-occurrence of words. Several interesting results have been obtained from networked models, such as the explanation of Zipf's Law as a consequence of the least effort principle and theories on the nature of syntactical relationships BIBREF6, BIBREF7.
In a more practical scenario, text networks have been used in text classification tasks BIBREF8, BIBREF9, BIBREF10. The main advantage of the model is that it does not rely on deep semantical information to obtain competitive results. Another advantage of graph-based approaches is that, when combined with other approaches, it yields competitive results BIBREF11. A simple, yet recurrent text model is the well-known word co-occurrence network. After optional textual pre-processing steps, in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window. A common strategy connects only adjacent words in the so called word adjacency networks.
While the co-occurrence representation yields good results in classification scenarios, some important features are not considered in the model. For example, long-range syntactical links, though less frequent than adjacent syntactical relationships, might be disregarded from a simple word adjacency approach BIBREF12. In addition, semantically similar words not sharing the same lemma are mapped into distinct nodes. In order to address these issues, here we introduce a modification of the traditional network representation by establishing additional edges, referred to as “virtual” edges. In the proposed model, in addition to the co-occurrence edges, we link two nodes (words) if the corresponding word embedding representation is similar. While this approach still does not merge similar nodes into the same concept, similar nodes are explicitly linked via virtual edges.
Our main objective here is to evaluate whether such an approach is able to improve the discriminability of word co-occurrence networks in a typical text network classification task. We evaluate the methodology for different embedding techniques, including GloVe, Word2Vec and FastText. We also investigated different thresholding strategies to establish virtual links. Our results revealed, as a proof of principle, that the proposed approach is able to improve the discriminability of the classification when compared to the traditional co-occurrence network. While the gain in performance depended upon the text length being considered, we found relevant gains for intermediary text lengths. Additional results also revealed that a simple thresholding strategy combined with the use of stopwords tends to yield the best results.
We believe that the proposed representation could be applied in other text classification tasks, which could lead to potential gains in performance. Because the inclusion of virtual edges is a simple technique to make the network denser, such an approach can benefit networked representations with a limited number of nodes and edges. This representation could also shed light into language mechanisms in theoretical studies relying on the representation of text as complex networks. Potential novel research lines leveraging the adopted approach to improve the characterization of texts in other applications are presented in the conclusion.
Related works
Complex networks have been used in a wide range of fields, including in Social Sciences BIBREF13, Neuroscience BIBREF14, Biology BIBREF15, Scientometry BIBREF16 and Pattern Recognition BIBREF17, BIBREF18, BIBREF19, BIBREF20. In text analysis, networks are used to uncover language patterns, including the origins of the ever present Zipf's Law BIBREF21 and the analysis of linguistic properties of natural and unknown texts BIBREF22, BIBREF23. Applications of network science in text mining and text classification encompasses applications in semantic analysis BIBREF24, BIBREF25, BIBREF26, BIBREF27, authorship attribution BIBREF28, BIBREF29 and stylometry BIBREF28, BIBREF30, BIBREF31. Here we focus in the stylometric analysis of texts using complex networks.
In BIBREF28, the authors used a co-occurrence network to study a corpus of English and Polish books. They considered a dataset of 48 novels, which were written by 8 different authors. Differently from traditional co-occurrence networks, some punctuation marks were considered as words when mapping texts as networks. The authors also decided to create a methodology to normalize the obtained network metrics, since they considered documents with variations in length. A similar approach was adopted in a similar study BIBREF32, with a focus on comparing novel measurements and measuring the effect of considering stopwords in the network structure.
A different approach to analyzing co-occurrence networks was devised in BIBREF33. Whilst most approaches only considered traditional network measurements or devised novel topological and dynamical measurements, the authors combined networked and semantic information to improve the performance of network-based classification. Interestingly, the combined use of network motifs and node labels (representing the corresponding words) allowed an improvement in performance in the considered task. A similar combination of techniques using a hybrid approach was proposed in BIBREF8. Network-based approaches have also been applied to authorship recognition tasks in other languages, including Persian texts BIBREF9.
Co-occurrence networks have also been used in contexts other than stylometric analysis. The main advantage of the approach is illustrated by the task of diagnosing diseases via text analysis BIBREF11. Because the topological analysis of co-occurrence language networks does not require deep semantic analysis, this model is able to represent texts created by patients suffering from cognitive impairment BIBREF11. Recently, it has been shown that the combination of network and traditional features can be used to improve the diagnosis of patients with cognitive impairment BIBREF11. Interestingly, this was one of the first approaches suggesting the use of embeddings to address the particular problem of the lack of statistics to create a co-occurrence network in short documents BIBREF34.
While many of the works dealing with word co-occurrence networks have been proposed in the last few years, no systematic study of the effects of including information from word embeddings in such networks has been conducted. This work studies how links created via embedding information modify the underlying structure of the networks and, most importantly, whether they can improve the model so as to provide better classification performance in the stylometry task.
Material and Methods
To represent texts as networks, we used the so-called word adjacency network representation BIBREF35, BIBREF28, BIBREF32. Typically, before creating the networks, the text is pre-processed. An optional pre-processing step is the removal of stopwords. This step is optional because such words comprise mostly articles and prepositions, which may be naturally represented by network edges. However, in some applications – including the authorship attribution task – stopwords (or function words) play an important role in the stylistic characterization of texts BIBREF32. The list of stopwords considered in this study is available in the Supplementary Information.
The pre-processing step may also include a lemmatization procedure. This step aims at mapping words conveying the same meaning into the same node. In the lemmatization process, nouns and verbs are mapped into their singular and infinitive forms. Note that, while this step is useful to merge words sharing a lemma into the same node, more complex semantic relationships are overlooked. For example, if “car” and “vehicle” co-occur in the same text, they are treated as distinct nodes, which may result in an inaccurate representation of the text.
Such a drawback is addressed by including “virtual” edges connecting nodes. In other words, even if two words are not adjacent in the text, we include “virtual” edges to indicate that two distant words are semantically related. The inclusion of such virtual edges is illustrated in Figure FIGREF1. To measure the semantic similarity between two concepts, we use word embeddings BIBREF36, BIBREF37. Thus, each word is represented using a vector encoding its semantic and contextual characteristics. Several interesting properties have been obtained from distributed representations of words. One particular property encoded in the embedding representation is the fact that the semantic similarity between concepts is reflected in the similarity of the vectors representing the words. Similarly to several other works, here we measure the similarity of the vectors via the cosine similarity BIBREF38.
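As a minimal illustration of this criterion, the sketch below computes the cosine similarity between two embedding vectors. The four-dimensional vectors are toy placeholders and do not correspond to vectors produced by any of the models considered in this study.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two word-embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy vectors standing in for the embeddings of "car" and "vehicle".
car = np.array([0.12, 0.83, -0.45, 0.30])
vehicle = np.array([0.10, 0.79, -0.40, 0.35])

print(cosine_similarity(car, vehicle))  # values close to 1 indicate semantically similar words
```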
The following strategies to create word embeddings were considered in this paper:
GloVe: the Global Vectors (GloVe) algorithm is an extension of the Word2Vec model BIBREF39 for efficient word vector learning BIBREF40. This approach combines global statistics from matrix factorization techniques (such as latent semantic analysis) with context-based and predictive methods like Word2Vec. The method is called Global Vectors because it captures global corpus statistics: instead of using a window to define the local context, GloVe constructs an explicit word-context matrix (or co-occurrence matrix) using statistics across the entire corpus. The final result is a learning model that oftentimes yields better word vector representations BIBREF40.
Word2Vec: this is a predictive model that finds dense vector representations of words using a three-layer neural network with a single hidden layer BIBREF39. It comes in two variants: the continuous bag-of-words and the skip-gram model. In the latter, the model analyzes the words of a set of sentences (or corpus) and attempts to predict the neighbors of such words. For example, taking as reference the word “Robin”, the model decides that “Hood” is more likely to follow the reference word than any other word. The vectors are obtained as follows: given the vocabulary (generated from all corpus words), the model trains a neural network with the sentences of the corpus. Then, for a given word, the probabilities that each word follows the reference word are obtained. Once the neural network is trained, the weights of the hidden layer are used as the vectors of each corpus word.
FastText: this method is another extension of the Word2Vec model BIBREF41. Unlike Word2Vec, FastText represents each word as a bag of character n-grams. Therefore, the neural network not only trains individual words, but also several n-grams of such words. The vector for a word is the sum of the vectors obtained for the character n-grams composing the word. For example, the embedding obtained for the word “computer” with $n\le 3$ is the sum of the embeddings obtained for “co”, “com”, “omp”, “mpu”, “put”, “ute”, “ter” and “er”. In this way, the method obtains improved representations for rare words, since n-grams composing rare words might be present in other words. The FastText representation also allows the model to understand suffixes and prefixes. Another advantage of FastText is its efficiency when trained on very large corpora. A minimal sketch of how such embeddings can be obtained in practice is given right after this list.
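The sketch below illustrates, assuming the gensim library (version 4 or later) is available, how Word2Vec and FastText vectors can be trained on a toy corpus and how pre-trained GloVe vectors are typically loaded. The corpus, parameter values and pre-trained model name are placeholders and do not correspond to the settings used in the experiments reported here.

```python
from gensim.models import Word2Vec, FastText

# Toy corpus: a list of tokenized sentences (placeholders only).
sentences = [
    ["robin", "hood", "lived", "in", "the", "forest"],
    ["the", "forest", "hid", "robin", "and", "his", "men"],
]

# Skip-gram Word2Vec and character n-gram FastText trained on the toy corpus.
w2v = Word2Vec(sentences, vector_size=50, window=5, min_count=1, sg=1)
ft = FastText(sentences, vector_size=50, window=5, min_count=1, min_n=2, max_n=3)

print(w2v.wv.similarity("robin", "forest"))
print(ft.wv.similarity("robin", "forest"))

# GloVe vectors are usually obtained pre-trained, e.g. via the gensim downloader:
# import gensim.downloader as api
# glove = api.load("glove-wiki-gigaword-100")
```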
Concerning the thresholding process, we considered two main strategies. First, we used a global strategy: in addition to the co-occurrence links (continuous lines in Figure FIGREF1), only “virtual” edges stronger than a given threshold are left in the network. Thus only the most similar concepts are connected via virtual links. This strategy is hereafter referred to as global strategy. Unfortunately, this method may introduce an undesired bias towards hubs BIBREF42.
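A minimal sketch of the global strategy is given below. It assumes the candidate virtual edges are already available as (word, word, similarity) triples; the threshold value is an arbitrary placeholder.

```python
def global_filter(candidate_edges, threshold=0.7):
    """Global strategy: keep only virtual edges whose similarity exceeds a fixed threshold.

    candidate_edges: iterable of (word_i, word_j, similarity) triples.
    """
    return [(wi, wj, s) for wi, wj, s in candidate_edges if s >= threshold]

# Example with made-up similarities.
candidates = [("car", "vehicle", 0.83), ("car", "forest", 0.21)]
print(global_filter(candidates))  # only the ("car", "vehicle") link survives
```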
To overcome the potential disadvantages of the global thresholding method, we also considered a more refined thresholding approach that takes into account the local structure to decide whether a weighted link is statistically significant BIBREF42. This method relies on the idea that the importance of an edge should be considered in the context in which it appears. In other words, the relevance of an edge should be evaluated by analyzing the nodes connected to its ending points. Using the concept of disparity filter, the method devised in BIBREF42 defines a null model that quantifies the probability of a node to be connected to an edge with a given weight, based on its other connections. This probability is used to define the significance of the edge. The parameter used to measure the significance of an edge $e_{ij}$ is $\alpha _{ij}$, which in the disparity filter is defined as

$\alpha _{ij} = 1 - (k_i - 1) \int _{0}^{p_{ij}} (1-x)^{k_i-2} \, dx,$
where $p_{ij} = w_{ij} / \sum _{l} w_{il}$ is the weight $w_{ij}$ of the edge $e_{ij}$ normalized by the strength (total edge weight) of the $i$-th node, and $k_i$ is the degree of the $i$-th node. The filtered network corresponds to the set of nodes and edges obtained by removing all edges with $\alpha $ higher than the considered threshold. Note that while the weights of co-occurrence links might be considered when computing $\alpha _{ij}$, only “virtual” edges (i.e. the dashed lines in Figure FIGREF1) are eligible to be removed from the network in the filtering step. This strategy is hereafter referred to as the local strategy.
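A simplified sketch of the local strategy is given below, using networkx and the closed form $(1 - p_{ij})^{k_i - 1}$ of the integral above. It assumes edge similarities are stored in a “weight” attribute (co-occurrence edges default to unit weight) and handles degree-one endpoints naively; it is a sketch rather than the reference implementation of BIBREF42.

```python
import networkx as nx

def disparity_alpha(graph):
    """Significance alpha_ij of each weighted edge (smaller alpha = more significant).

    For each edge, alpha is evaluated from both endpoints and the smaller
    (most favourable) value is kept, as in the disparity filter.
    """
    alphas = {}
    for i, j, data in graph.edges(data=True):
        w = data.get("weight", 1.0)
        a_ij = 1.0
        for u in (i, j):
            k = graph.degree(u)
            if k > 1:
                strength = sum(d.get("weight", 1.0) for _, _, d in graph.edges(u, data=True))
                p = w / strength
                a_ij = min(a_ij, (1.0 - p) ** (k - 1))
        alphas[frozenset((i, j))] = a_ij
    return alphas

def local_filter(graph, virtual_edges, alpha_max=0.3):
    """Remove virtual edges whose significance alpha exceeds the chosen threshold."""
    alphas = disparity_alpha(graph)
    for i, j in virtual_edges:
        if alphas.get(frozenset((i, j)), 1.0) > alpha_max and graph.has_edge(i, j):
            graph.remove_edge(i, j)
    return graph
```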
After co-occurrence networks are created and virtual edges are included, in the next step we use a characterization based on topological analysis. Because a global topological analysis is prone to variations in network size, we focused our analysis on the local characterization of complex networks. In a local topological analysis, we use as features the values of topological/dynamical measurements obtained for a set of words. In this case, we selected as features the words occurring in all books of the dataset. For each word, we considered the following network measurements: degree, betweenness, clustering coefficient, average shortest path length, PageRank, concentric symmetry (at the second and third hierarchical level) BIBREF32 and accessibility BIBREF43, BIBREF44 (at the second and third hierarchical level). We chose these measurements because all of them capture some particular linguistic feature of texts BIBREF45, BIBREF46, BIBREF47, BIBREF48. After the network measurements are extracted, they are used in machine learning algorithms. In our experiments, we considered Decision Trees (DT), nearest neighbors (kNN), Naive Bayes (NB) and Support Vector Machines (SVM). We used heuristics described in the literature to optimize classifier parameters BIBREF49. The accuracy of the pattern recognition methods was evaluated using cross-validation BIBREF50.
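The sketch below illustrates the feature extraction and classification steps for a subset of the measurements listed above (degree, betweenness, clustering coefficient and PageRank), using networkx and scikit-learn. The selected words, the 10-fold cross-validation and the default SVM parameters are placeholders, and the symmetry and accessibility measurements are omitted because they require custom implementations not available in networkx.

```python
import networkx as nx
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

SELECTED_WORDS = ["time", "man", "day"]  # placeholder: words assumed to occur in every book

def node_features(graph, words):
    """Degree, betweenness, clustering coefficient and PageRank for a fixed word set."""
    betweenness = nx.betweenness_centrality(graph)
    clustering = nx.clustering(graph)
    pagerank = nx.pagerank(graph)
    features = []
    for w in words:
        features.extend([graph.degree(w), betweenness[w], clustering[w], pagerank[w]])
    return features

def evaluate(book_networks, author_labels):
    """Cross-validated SVM accuracy, one feature vector per book network."""
    X = np.array([node_features(g, SELECTED_WORDS) for g in book_networks])
    y = np.array(author_labels)
    return cross_val_score(SVC(), X, y, cv=10).mean()
```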
In summary, the methodology used in this paper encompasses the following steps:
Network construction: here texts are mapped into co-occurrence networks. Some variations exist in the literature; here, however, we focus on the most usual one, i.e. the possibility of considering or disregarding stopwords. A network with co-occurrence links is obtained after this step (a compact sketch covering this and the following two steps is given right after this list).
Network enrichment: in this step, the network is enriched with virtual edges established via the similarity of word embeddings. After this step, we obtain a complete network with weighted links. Virtually any embedding technique could be used to gauge the similarity between nodes.
Network filtering: in order to eliminate spurious links included in the last step, the weakest edges are filtered. Two approaches were considered: a simple approach based on a global threshold and a local thresholding strategy that preserves network community structure. The outcome of this network filtering step is a network with two types of links: co-occurrence and virtual links (as shown in Figure FIGREF1).
Feature extraction: In this step, topological and dynamical network features are extracted. Here, we do not discriminate co-occurrence from virtual edges to compute the network metrics.
Pattern classification: once features are extracted from the complex networks, they are used in pattern classification methods. These might include supervised, unsupervised and semi-supervised methods; here, the framework is exemplified in the supervised scenario.
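As referenced in the network construction step above, the sketch below strings together the construction, enrichment and global filtering steps for a single pre-processed document. It assumes an embeddings object exposing a gensim-style similarity interface, and the enumeration of all node pairs is quadratic in the vocabulary size, which is acceptable only for illustration purposes.

```python
import itertools
import networkx as nx

def build_enriched_network(tokens, embeddings, p=0.1):
    """Co-occurrence construction, enrichment with virtual edges and global filtering.

    tokens: pre-processed word sequence of one document.
    embeddings: word-vector model exposing `similarity(w1, w2)` and `in` (e.g. gensim KeyedVectors).
    p: fraction of virtual edges to add, relative to the number of co-occurrence edges.
    """
    g = nx.Graph()
    # 1) co-occurrence (word adjacency) links
    for w1, w2 in zip(tokens, tokens[1:]):
        if w1 != w2:
            g.add_edge(w1, w2, kind="cooccurrence")
    n_cooccurrence = g.number_of_edges()
    # 2) candidate virtual links weighted by embedding similarity
    candidates = []
    for w1, w2 in itertools.combinations(g.nodes(), 2):
        if not g.has_edge(w1, w2) and w1 in embeddings and w2 in embeddings:
            candidates.append((w1, w2, float(embeddings.similarity(w1, w2))))
    # 3) global filtering: keep only the strongest fraction p of virtual edges
    candidates.sort(key=lambda edge: edge[2], reverse=True)
    for w1, w2, sim in candidates[: int(p * n_cooccurrence)]:
        g.add_edge(w1, w2, kind="virtual", weight=sim)
    return g
```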
The above framework is exemplified with the most common techniques. It should be noted, however, that the methods used can be replaced by similar techniques. For example, the network construction could consider stopwords or even punctuation marks BIBREF51. Another possibility is the use of different thresholding strategies. While a systematic analysis of techniques and parameters is still required to reveal other potential advantages of the framework based on the addition of virtual edges, in this paper we provide a first analysis showing that virtual edges can be useful to improve the discriminability of texts modeled as complex networks.
Here we used a dataset compatible with datasets used recently in the literature (see e.g. BIBREF28, BIBREF10, BIBREF52). The objective of the studied stylometric task is to identify the authorship of an unknown document BIBREF53. All data and some statistics of each book are shown in the Supplementary Information.
Results and Discussion
In Section SECREF13, we probe whether the inclusion of virtual edges is able to improve the performance of the traditional co-occurrence network-based classification in a usual stylometry task. While the focus of this paper is not to perform a systematic analysis of different methods comprising the adopted network, we consider two variations in the adopted methodology. In Section SECREF19, we consider the use of stopwords and the adoption of a local thresholding process to establish different criteria to create new virtual edges.
Results and Discussion ::: Performance analysis
In Figure FIGREF14, we show some of the improvements in performance obtained when including a fixed fraction of virtual edges, using GloVe as the embedding method. In each subpanel, we show the relative improvement in performance as a function of the fraction of additional edges. In this section, we considered the traditional co-occurrence network as the starting point; in other words, the network construction disregarded stopwords. The list of stopwords considered in this paper is available in the Supplementary Information. We also considered the global approach to filter edges.
The relative improvement in performance is given by $\Gamma _+{(p)}/\Gamma _0$, where $\Gamma _+{(p)}$ is the accuracy rate obtained when $p\%$ additional edges are included and $\Gamma _0 = \Gamma _+{(p=0)}$, i.e. $\Gamma _0$ is the accuracy rate measured from the traditional co-occurrence model. We only show the highest relative improvements in performance for each classifier. In our analysis, we also considered samples of text with distinct lengths, since the performance of network-based methods is sensitive to text length BIBREF34. In this figure, we considered samples comprising $w=\lbrace 1.0, 2.5, 5.0, 10.0\rbrace $ thousand words.
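For concreteness, the ratio defined above can be computed as in the toy example below; the accuracy values are purely illustrative and are not taken from the tables or figures of this study.

```python
def relative_improvement(gamma_p, gamma_0):
    """Relative performance Gamma_+(p) / Gamma_0 with respect to the traditional model."""
    return gamma_p / gamma_0

# Illustrative (made-up) accuracies: 0.62 with p% virtual edges vs. 0.50 without.
print(relative_improvement(0.62, 0.50))  # 1.24, i.e. a 24% relative gain
```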
The results obtained for GloVe show that the highest relative improvements in performance occur for decision trees. This is especially apparent for the shortest samples. For $w=1,000$ words, the decision tree accuracy is enhanced by almost 50% when $p=20\%$. An excellent gain in performance is also observed for both the Naive Bayes and SVM classifiers, when $p=18\%$ and $p=12\%$, respectively. When $w=2,500$ words, the highest improvement was observed for the decision tree algorithm. A minor improvement was observed for the kNN method. A similar behavior occurred for $w=5,000$ words. Interestingly, SVM seems to benefit from the use of additional edges when larger documents are considered. When only 5% virtual edges are included, the relative gain in performance is about 45%.
The relative gain in performance obtained for Word2Vec is shown in Figure FIGREF15. Overall, once again decision trees obtained the highest gain in performance when short texts are considered. Similarly to the analysis based on the GloVe method, the gain for kNN is low when compared to the benefit received by other methods. Here, a considerable gain for SVM is only clear for $w=2,500$ and $p=10\%$. When large texts are considered, Naive Bayes obtained the largest gain in performance.
Finally, the relative gain in performance obtained for FastText is shown in Figure FIGREF16. The prominent role of virtual edges in the decision tree classification of short texts is once again evident. Conversely, the classification of large documents using virtual edges mostly benefits the Naive Bayes classifier. Similarly to the results observed for GloVe and Word2Vec, the gain in performance obtained for kNN is low when compared to the other methods.
While Figures FIGREF14 – FIGREF16 show the relative behavior of the accuracy, it is still interesting to observe the absolute accuracy rates obtained with the classifiers. In Table TABREF17, we show the best accuracy rate (i.e. $\max \Gamma _+ = \max _p \Gamma _+(p)$) for GloVe. We also show the average difference in performance ($\langle \Gamma _+ - \Gamma _0 \rangle $) and the total number of cases in which an improvement in performance was observed ($N_+$). $N_+$ ranges in the interval $0 \le N_+ \le 20$. Table TABREF17 summarizes the results obtained for $w = \lbrace 1.0, 5.0, 10.0\rbrace $ thousand words. Additional results for other text lengths are available in Tables TABREF28–TABREF30 of the Supplementary Information.
In very short texts, despite the low accuracy rates, an improvement can be observed for all classifiers. The best result was obtained with SVM when virtual edges were included. For $w=5,000$ words, the inclusion of new edges has no positive effect on either the kNN or the Naive Bayes algorithm. On the other hand, once again SVM could be improved, yielding an optimized performance. For $w=10,000$ words, SVM could not be improved; however, even without improvement it yielded the maximum accuracy rate. The Naive Bayes algorithm, on average, could be improved by a margin of about 10%.
The results obtained for Word2Vec are summarized in Table TABREF29 of the Supplementary Information. Considering short documents ($w=1,000$ words), here the best result occurs only with the decision tree method combined with enriched networks. Differently from the GloVe approach, SVM does not yield the best results. Nonetheless, the highest accuracy across all classifiers and values of $p$ is the same. For larger documents ($w=5,000$ and $w=10,000$ words), no significant difference in performance between Word2Vec and GloVe is apparent.
The results obtained for FastText are shown in Table TABREF18. In short texts, only kNN and Naive Bayes have their performance improved with virtual edges. However, none of the optimized results for these classifiers outperformed SVM applied to the traditional co-occurrence model. Conversely, when $w=5,000$ words, the optimized results are obtained with virtual edges in the SVM classifier. Apart from kNN, the enriched networks improved the traditional approach in all classifiers. For large chunks of texts ($w=10,000$), once again the approach based on SVM and virtual edges yielded optimized results. All classifiers benefited from the inclusion of additional edges. Remarkably, Naive Bayes improved by a margin of about $13\%$.
Results and Discussion ::: Effects of considering stopwords and local thresholding
While in the previous section we focused our analysis on the traditional word co-occurrence model, here we probe whether the idea of considering virtual edges can also yield optimized results under particular modifications of the framework described in the methodology. The first modification of the co-occurrence model is the use of stopwords. While stopwords are disregarded in semantically-oriented applications of network language modeling, in other applications they can unravel interesting linguistic patterns BIBREF10. Here we analyzed the effect of using stopwords in enriched networks. We summarize the obtained results in Table TABREF20. We only show the results obtained with SVM, as it yielded the best results in comparison to the other classifiers. The accuracy rates for the other classifiers are shown in the Supplementary Information.
The results in Table TABREF20 reveal that even when stopwords are considered in the original model, an improvement can be observed with the addition of virtual edges. However, the results show that the degree of improvement depends upon the text length. In very short texts ($w=1,000$), none of the embedding strategies was able to improve the performance of the classification. For $w=1,500$, a minor improvement was observed with FastText: the accuracy increased from $\Gamma _0 = 37.18\%$ to $38.46\%$. A larger improvement could be observed for $w=2,000$. Both the Word2Vec and FastText approaches allowed an increase of more than 5% in performance. A gain higher than 10% was observed for $w=2,500$ with Word2Vec. For larger pieces of text, the gain is less expressive or absent. All in all, the results show that the use of virtual edges can also benefit the network approach based on stopwords. However, no significant improvement could be observed for very short and very large documents. The comparison of all three embedding methods showed that no method performed better than the others in all cases.
We also investigated whether more informed thresholding strategies could provide better results. While the simple global thresholding approach might not be able to represent more complex structures, we also tested a more robust approach based on the local method proposed by Serrano et al. BIBREF42. In Table TABREF21, we summarize the results obtained with this thresholding strategy. The table shows $\max \Gamma _+^{(L)} / \max \Gamma _+^{(G)}$, where $\Gamma _+^{(L)}$ and $\Gamma _+^{(G)}$ are the accuracies obtained with the local and global thresholding strategies, respectively. The results were obtained with the SVM classifier, as it turned out to be the most efficient classification method. We found that there is no gain in performance when the local strategy is used. In particular cases, the global strategy is considerably more efficient. This is the case, e.g., when GloVe is employed in texts with $w=1,500$ words: the performance of the global strategy is $12.2\%$ higher than the one obtained with the local method. A minor difference in performance was found in texts comprising $w=1,000$ words, yet the global strategy is still more efficient than the local one.
To summarize all results obtained in this study, we show in Table TABREF22 the best results obtained for each text length. We also show the relative gain in performance with the proposed approach and the embedding technique yielding the best result. All optimized results were obtained with the use of stopwords, the global thresholding strategy and SVM as the classification algorithm. A significant gain is more evident for intermediate text lengths.
Conclusion
Textual classification remains one of the most important facets of the Natural Language Processing area. Here we studied a family of classification methods based on word co-occurrence networks. Despite its apparent simplicity, this model has been useful in several practical and theoretical scenarios. We proposed a modification of the traditional model by establishing virtual edges to connect nodes that are semantically similar via word embeddings. The reasoning behind this strategy is the fact that similar words are not properly linked in the traditional model and, thus, important links might be overlooked if only adjacent words are linked.
Taking a stylometric problem as reference task, we showed – as a proof of principle – that the use of virtual edges might improve the discriminability of networks. When analyzing the best results for each text length, apart from very short and very long texts, the proposed strategy yielded optimized results in all cases. The best classification performance was always obtained with the SVM classifier. In addition, we found an improved performance when stopwords are used in the construction of the enriched co-occurrence networks. Finally, a simple global thresholding strategy was found to be more efficient than a local approach that preserves the community structure of the networks. Because complex networks are usually combined with other strategies BIBREF8, BIBREF11, we believe that the proposed approach could be used in combination with other methods to improve the classification performance of other text classification tasks.
Our findings pave the way for research in several new directions. While we probed the effectiveness of virtual edges in a specific text classification task, this approach could be extended to general classification tasks. A systematic comparison of embedding techniques could also be performed so as to include other recent techniques BIBREF54, BIBREF55. We could also identify other relevant techniques to create virtual edges, thus allowing the use of the methodology in networked systems other than texts. For example, a network could be enriched with embeddings obtained from graph embedding techniques. A simpler approach could also consider link prediction BIBREF56 to create virtual edges. Finally, another interesting family of studies concerns the discrimination between co-occurrence and virtual edges, possibly by creating novel network measurements that take heterogeneous links into account.
Acknowledgments
The authors acknowledge financial support from FAPESP (Grant no. 16/19069-9), CNPq-Brazil (Grant no. 304026/2018-2). This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
Supplementary Information ::: Stopwords
The following words were considered as stopwords in our analysis: all, just, don't, being, over, both, through, yourselves, its, before, o, don, hadn, herself, ll, had, should, to, only, won, under, ours,has, should've, haven't, do, them, his, very, you've, they, not, during, now, him, nor, wasn't, d, did, didn, this, she, each, further, won't, where, mustn't, isn't, few, because, you'd, doing, some, hasn, hasn't, are, our, ourselves, out, what, for, needn't, below, re, does, shouldn't, above, between, mustn, t, be, we, who, mightn't, doesn't, were, here, shouldn, hers, aren't, by, on, about, couldn, of, wouldn't, against, s, isn, or, own, into, yourself, down, hadn't, mightn, couldn't, wasn, your, you're, from, her, their, aren, it's, there, been, whom, too, wouldn, themselves, weren, was, until, more, himself, that, didn't, but, that'll, with, than, those, he, me, myself, ma, weren't, these, up, will, while, ain, can, theirs, my, and, ve, then, is, am, it, doesn, an, as, itself, at, have, in, any, if, again, no, when, same, how, other, which, you, shan't, shan, needn, haven, after, most, such, why, a, off i, m, yours, you'll, so, y, she's, the, having, once.
Supplementary Information ::: List of books
The list of books is shown in Tables TABREF25 and TABREF26. For each book we show the respective authors (Aut.) and the following quantities: total number of words ($N_W$), total number of sentences ($N_S$), total number of paragraphs ($N_P$) and the average sentence length ($\langle S_L \rangle $), measured in number of words. The following authors were considered: Hector Hugh (HH), Thomas Hardy (TH), Daniel Defoe (DD), Allan Poe (AP), Bram Stoker (BS), Mark Twain (MT), Charles Dickens (CD), Pelham Grenville (PG), Charles Darwin (CD), Arthur Doyle (AD), George Eliot (GE), Jane Austen (JA), and Joseph Conrad (JC).
Supplementary Information ::: Additional results
In this section we show additional results obtained for different text lengths. More specifically, we show the results obtained for GloVe, Word2Vec and FastText when stopwords are either considered in the text or disregarded from the analysis. | long-range syntactical links, though less frequent than adjacent syntactical relationships, might be disregarded from a simple word adjacency approach
23d32666dfc29ed124f3aa4109e2527efa225fbc | 23d32666dfc29ed124f3aa4109e2527efa225fbc_0 | Q: Do they use word embeddings alone, or do they replace some previous features of the model with word embeddings?
Text: Introduction
The ability to construct complex and diverse linguistic structures is one of the main features that set us apart from all other species. Despite its ubiquity, some aspects of language remain unknown. Topics such as language origin and evolution have been studied by researchers from diverse disciplines, including Linguistics, Computer Science, Physics and Mathematics BIBREF0, BIBREF1, BIBREF2. In order to better understand the underlying language mechanisms and universal linguistic properties, several models have been developed BIBREF3, BIBREF4. A particular language representation regards texts as complex systems BIBREF5. Written texts can be considered as complex networks (or graphs), where nodes could represent syllables, words, sentences, paragraphs or even larger chunks BIBREF5. In such models, network edges represent the proximity between nodes, e.g. the frequency of the co-occurrence of words. Several interesting results have been obtained from networked models, such as the explanation of Zipf's Law as a consequence of the least effort principle and theories on the nature of syntactical relationships BIBREF6, BIBREF7.
In a more practical scenario, text networks have been used in text classification tasks BIBREF8, BIBREF9, BIBREF10. The main advantage of the model is that it does not rely on deep semantic information to obtain competitive results. Another advantage of graph-based approaches is that, when combined with other approaches, they yield competitive results BIBREF11. A simple, yet recurrent text model is the well-known word co-occurrence network. After optional textual pre-processing steps, in a co-occurrence network each different word becomes a node and edges are established via co-occurrence within a desired window. A common strategy connects only adjacent words, in the so-called word adjacency networks.
While the co-occurrence representation yields good results in classification scenarios, some important features are not considered in the model. For example, long-range syntactical links, though less frequent than adjacent syntactical relationships, might be disregarded from a simple word adjacency approach BIBREF12. In addition, semantically similar words not sharing the same lemma are mapped into distinct nodes. In order to address these issues, here we introduce a modification of the traditional network representation by establishing additional edges, referred to as “virtual” edges. In the proposed model, in addition to the co-occurrence edges, we link two nodes (words) if the corresponding word embedding representation is similar. While this approach still does not merge similar nodes into the same concept, similar nodes are explicitly linked via virtual edges.
Our main objective here is to evaluate whether such an approach is able to improve the discriminability of word co-occurrence networks in a typical text network classification task. We evaluate the methodology for different embedding techniques, including GloVe, Word2Vec and FastText. We also investigated different thresholding strategies to establish virtual links. Our results revealed, as a proof of principle, that the proposed approach is able to improve the discriminability of the classification when compared to the traditional co-occurrence network. While the gain in performance depended upon the text length being considered, we found relevant gains for intermediary text lengths. Additional results also revealed that a simple thresholding strategy combined with the use of stopwords tends to yield the best results.
We believe that the proposed representation could be applied in other text classification tasks, which could lead to potential gains in performance. Because the inclusion of virtual edges is a simple technique to make the network denser, such an approach can benefit networked representations with a limited number of nodes and edges. This representation could also shed light into language mechanisms in theoretical studies relying on the representation of text as complex networks. Potential novel research lines leveraging the adopted approach to improve the characterization of texts in other applications are presented in the conclusion.
Related works
Complex networks have been used in a wide range of fields, including in Social Sciences BIBREF13, Neuroscience BIBREF14, Biology BIBREF15, Scientometry BIBREF16 and Pattern Recognition BIBREF17, BIBREF18, BIBREF19, BIBREF20. In text analysis, networks are used to uncover language patterns, including the origins of the ever present Zipf's Law BIBREF21 and the analysis of linguistic properties of natural and unknown texts BIBREF22, BIBREF23. Applications of network science in text mining and text classification encompasses applications in semantic analysis BIBREF24, BIBREF25, BIBREF26, BIBREF27, authorship attribution BIBREF28, BIBREF29 and stylometry BIBREF28, BIBREF30, BIBREF31. Here we focus in the stylometric analysis of texts using complex networks.
In BIBREF28, the authors used a co-occurrence network to study a corpus of English and Polish books. They considered a dataset of 48 novels, which were written by 8 different authors. Differently from traditional co-occurrence networks, some punctuation marks were considered as words when mapping texts as networks. The authors also decided to create a methodology to normalize the obtained network metrics, since they considered documents with variations in length. A similar approach was adopted in a similar study BIBREF32, with a focus on comparing novel measurements and measuring the effect of considering stopwords in the network structure.
A different approach to analyze co-occurrence networks was devised in BIBREF33. Whilst most approaches only considered traditional network measurements or devised novel topological and dynamical measurements, the authors combined networked and semantic information to improve the performance of network-based classification. Interesting, the combined use of network motifs and node labels (representing the corresponding words) allowed an improvement in performance in the considered task. A similar combination of techniques using a hybrid approach was proposed in BIBREF8. Networked-based approaches has also been applied to the authorship recognition tasks in other languages, including Persian texts BIBREF9.
Co-occurrence networks have been used in other contexts other than stylometric analysis. The main advantage of this approach is illustrated in the task aimed at diagnosing diseases via text analysis BIBREF11. Because the topological analysis of co-occurrence language networks do not require deep semantic analysis, this model is able to model text created by patients suffering from cognitive impairment BIBREF11. Recently, it has been shown that the combination of network and traditional features could be used to improve the diagnosis of patients with cognitive impairment BIBREF11. Interestingly, this was one of the first approaches suggesting the use of embeddings to address the particular problem of lack of statistics to create a co-occurrence network in short documents BIBREF34.
While many of the works dealing with word co-occurrence networks have been proposed in the last few years, no systematic study of the effects of including information from word embeddings in such networks has been analyzed. This work studies how links created via embeddings information modify the underlying structure of networks and, most importantly, how it can improve the model to provide improved classification performance in the stylometry task.
Material and Methods
To represent texts as networks, we used the so-called word adjacency network representation BIBREF35, BIBREF28, BIBREF32. Typically, before creating the networks, the text is pre-processed. An optional pre-processing step is the removal of stopwords. This step is optional because such words include mostly article and prepositions, which may be artlessly represented by network edges. However, in some applications – including the authorship attribution task – stopwords (or function words) play an important role in the stylistic characterization of texts BIBREF32. A list of stopwords considered in this study is available in the Supplementary Information.
The pre-processing step may also include a lemmatization procedure. This step aims at mapping words conveying the same meaning into the same node. In the lemmatization process, nouns and verbs are mapped into their singular and infinite forms. Note that, while this step is useful to merge words sharing a lemma into the same node, more complex semantical relationships are overlooked. For example, if “car” and “vehicle” co-occur in the same text, they are considered as distinct nodes, which may result in an inaccurate representation of the text.
Such a drawback is addressed by including “virtual” edges connecting nodes. In other words, even if two words are not adjacent in the text, we include “virtual” edges to indicate that two distant words are semantically related. The inclusion of such virtual edges is illustrated in Figure FIGREF1. In order to measure the semantical similarity between two concepts, we use the concept of word embeddings BIBREF36, BIBREF37. Thus, each word is represented using a vector representation encoding the semantical and contextual characteristics of the word. Several interesting properties have been obtained from distributed representation of words. One particular property encoded in the embeddings representation is the fact the semantical similarity between concepts is proportional to the similarity of vectors representing the words. Similarly to several other works, here we measure the similarity of the vectors via cosine similarity BIBREF38.
The following strategies to create word embedding were considered in this paper:
GloVe: the Global Vectors (GloVe) algorithm is an extension of the Word2vec model BIBREF39 for efficient word vector learning BIBREF40. This approach combines global statistics from matrix factorization techniques (such as latent semantic analysis) with context-based and predictive methods like Word2Vec. This method is called as Global Vector method because the global corpus statistics are captured by GloVe. Instead of using a window to define the local context, GloVe constructs an explicit word-context matrix (or co-occurrence matrix) using statistics across the entire corpus. The final result is a learning model that oftentimes yields better word vector representations BIBREF40.
Word2Vec: this is a predictive model that finds dense vector representations of words using a three-layer neural network with a single hidden layer BIBREF39. It can be defined in a two-fold way: continuous bag-of-words and skip-gram model. In the latter, the model analyzes the words of a set of sentences (or corpus) and attempts to predict the neighbors of such words. For example, taking as reference the word “Robin”, the model decides that “Hood” is more likely to follow the reference word than any other word. The vectors are obtained as follows: given the vocabulary (generated from all corpus words), the model trains a neural network with the sentences of the corpus. Then, for a given word, the probabilities that each word follows the reference word are obtained. Once the neural network is trained, the weights of the hidden layer are used as vectors of each corpus word.
FastText: this method is another extension of the Word2Vec model BIBREF41. Unlike Word2Vec, FastText represents each word as a bag of character n-grams. Therefore, the neural network not only trains individual words, but also several n-grams of such words. The vector for a word is the sum of vectors obtained for the character n-grams composing the word. For example, the embedding obtained for the word “computer” with $n\le 3$ is the sum of the embeddings obtained for “co”, “com”, “omp”, “mpu”, “put”, “ute”, “ter” and “er”. In this way, this method obtains improved representations for rare words, since n-grams composing rare words might be present in other words. The FastText representation also allows the model to understand suffixes and prefixes. Another advantage of FastText is its efficiency to be trained in very large corpora.
Concerning the thresholding process, we considered two main strategies. First, we used a global strategy: in addition to the co-occurrence links (continuous lines in Figure FIGREF1), only “virtual” edges stronger than a given threshold are left in the network. Thus only the most similar concepts are connected via virtual links. This strategy is hereafter referred to as global strategy. Unfortunately, this method may introduce an undesired bias towards hubs BIBREF42.
To overcome the potential disadvantages of the global thresholding method, we also considered a more refined thresholding approach that takes into account the local structure to decide whether a weighted link is statistically significant BIBREF42. This method relies on the idea that the importance of an edge should be considered in the the context in which it appears. In other words, the relevance of an edge should be evaluated by analyzing the nodes connected to its ending points. Using the concept of disparity filter, the method devised in BIBREF42 defines a null model that quantifies the probability of a node to be connected to an edge with a given weight, based on its other connections. This probability is used to define the significance of the edge. The parameter that is used to measure the significance of an edge $e_{ij}$ is $\alpha _{ij}$, defined as:
where $w_{ij}$ is the weight of the edge $e_{ij}$ and $k_i$ is the degree of the $i$-th node. The obtained network corresponds to the set of nodes and edges obtained by removing all edges with $\alpha $ higher than the considered threshold. Note that while the similarity between co-occurrence links might be considered to compute $\alpha _{ij}$, only “virtual” edges (i.e. the dashed lines in Figure FIGREF1) are eligible to be removed from the network in the filtering step. This strategy is hereafter referred to as local strategy.
After co-occurrence networks are created and virtual edges are included, in the next step we used a characterization based on topological analysis. Because a global topological analysis is prone to variations in network size, we focused our analysis in the local characterization of complex networks. In a local topological analysis, we use as features the value of topological/dynamical measurements obtained for a set of words. In this case, we selected as feature the words occurring in all books of the dataset. For each word, we considered the following network measurements: degree, betweenness, clustering coefficient, average shortest path length, PageRank, concentric symmetry (at the second and third hierarchical level) BIBREF32 and accessibility BIBREF43, BIBREF44 (at the second and third hierarchical level). We chose these measurements because all of them capture some particular linguistic feature of texts BIBREF45, BIBREF46, BIBREF47, BIBREF48. After network measurements are extracted, they are used in machine learning algorithms. In our experiments, we considered Decision Trees (DT), nearest neighbors (kNN), Naive Bayes (NB) and Support Vector Machines (SVM). We used some heuristics to optimize classifier parameters. Such techniques are described in the literature BIBREF49. The accuracy of the pattern recognition methods were evaluated using cross-validation BIBREF50.
In summary, the methodology used in this paper encompasses the following steps:
Network construction: here texts are mapped into a co-occurrence networks. Some variations exists in the literature, however here we focused in the most usual variation, i.e. the possibility of considering or disregarding stopwords. A network with co-occurrence links is obtained after this step.
Network enrichment: in this step, the network is enriched with virtual edges established via similarity of word embeddings. After this step, we are given a complete network with weighted links. Virtually, any embedding technique could be used to gauge the similarity between nodes.
Network filtering: in order to eliminate spurious links included in the last step, the weakest edges are filtered. Two approaches were considered: a simple approach based on a global threshold and a local thresholding strategy that preserves network community structure. The outcome of this network filtering step is a network with two types of links: co-occurrence and virtual links (as shown in Figure FIGREF1).
Feature extraction: In this step, topological and dynamical network features are extracted. Here, we do not discriminate co-occurrence from virtual edges to compute the network metrics.
Pattern classification: once features are extracted from complex networks, they are used in pattern classification methods. This might include supervised, unsupervised and semi-supervised classification. This framework is exemplified in the supervised scenario.
The above framework is exemplified with the most common technique(s). It should be noted that the methods used, however, can be replaced by similar techniques. For example, the network construction could consider stopwords or even punctuation marks BIBREF51. Another possibility is the use of different strategies of thresholding. While a systematic analysis of techniques and parameters is still required to reveal other potential advantages of the framework based on the addition of virtual edges, in this paper we provide a first analysis showing that virtual edges could be useful to improve the discriminability of texts modeled as complex networks.
Here we used a dataset compatible with datasets used recently in the literature (see e.g. BIBREF28, BIBREF10, BIBREF52). The objective of the studied stylometric task is to identify the authorship of an unknown document BIBREF53. All data and some statistics of each book are shown in the Supplementary Information.
Results and Discussion
In Section SECREF13, we probe whether the inclusion of virtual edges is able to improve the performance of the traditional co-occurrence network-based classification in a usual stylometry task. While the focus of this paper is not to perform a systematic analysis of different methods comprising the adopted network, we consider two variations in the adopted methodology. In Section SECREF19, we consider the use of stopwords and the adoption of a local thresholding process to establish different criteria to create new virtual edges.
Results and Discussion ::: Performance analysis
In Figure FIGREF14, we show some of the improvements in performance obtained when including a fixed amount of virtual edges using GloVe as embedding method. In each subpanel, we show the relative improvement in performance obtained as a function of the fraction of additional edges. In this section, we considered the traditional co-occurrence as starting point. In other words, the network construction disregarded stopwords. The list of stopwords considered in this paper is available in the Supplementary Information. We also considered the global approach to filter edges.
The relative improvement in performance is given by $\Gamma _+{(p)}/\Gamma _0$, where $\Gamma _+{(p)}$ is the accuracy rate obtained when $p\%$ additional edges are included and $\Gamma _0 = \Gamma _+{(p=0)}$, i.e. $\Gamma _0$ is the accuracy rate measured from the traditional co-occurrence model. We only show the highest relative improvements in performance for each classifier. In our analysis, we considered also samples of text with distinct length, since the performance of network-based methods is sensitive to text length BIBREF34. In this figure, we considered samples comprising $w=\lbrace 1.0, 2.5, 5.0, 10.0\rbrace $ thousand words.
The results obtained for GloVe show that the highest relative improvements in performance occur for decision trees. This is apparent specially for the shortest samples. For $w=1,000$ words, the decision tree accuracy is enhanced by a factor of almost 50% when $p=20\%$. An excellent gain in performance is also observed for both Naive Bayes and SVM classifiers, when $p=18\%$ and $p=12\%$, respectively. When $w=2,500$ words, the highest improvements was observed for the decision tree algorithm. A minor improvement was observed for the kNN method. A similar behavior occurred for $w=5,000$ words. Interestingly, SVM seems to benefit from the use of additional edges when larger documents are considered. When only 5% virtual edges are included, the relative gain in performance is about 45%.
The relative gain in performance obtained for Word2vec is shown in Figure FIGREF15. Overall, once again decision trees obtained the highest gain in performance when short texts are considered. Similar to the analysis based on the GloVe method, the gain for kNN is low when compared to the benefit received by other methods. Here, a considerable gain for SVM in only clear for $w=2,500$ and $p=10\%$. When large texts are considered, Naive Bayes obtained the largest gain in performance.
Finally, the relative gain in performance obtained for FastText is shown in Figure FIGREF16. The prominent role of virtual edges in decision tree algorithm in the classification of short texts once again is evident. Conversely, the classification of large documents using virtual edges mostly benefit the classification based on the Naive Bayes classifier. Similarly to the results observed for Glove and Word2vec, the gain in performance obtained for kNN is low compared when compared to other methods.
While Figures FIGREF14 – FIGREF16 show the relative behavior in the accuracy, it still interesting to observe the absolute accuracy rate obtained with the classifiers. In Table TABREF17, we show the best accuracy rate (i.e. $\max \Gamma _+ = \max _p \Gamma _+(p)$) for GloVe. We also show the average difference in performance ($\langle \Gamma _+ - \Gamma _0 \rangle $) and the total number of cases in which an improvement in performance was observed ($N_+$). $N_+$ ranges in the interval $0 \le N_+ \le 20$. Table TABREF17 summarizes the results obtained for $w = \lbrace 1.0, 5.0, 10.0\rbrace $ thousand words. Additional results for other text length are available in Tables TABREF28–TABREF30 of the Supplementary Information.
In very short texts, despite the low accuracy rates, an improvement can be observed in all classifiers. The best results was obtained with SVM when virtual edges were included. For $w=5,000$ words, the inclusion of new edges has no positive effect on both kNN and Naive Bayes algorithms. On the other hand, once again SVM could be improved, yielding an optimized performance. For $w=10,000$ words, SVM could not be improved. However, even without improvement it yielded the maximum accuracy rate. The Naive Bayes algorithm, in average, could be improved by a margin of about 10%.
The results obtained for Word2vec are summarized in Table TABREF29 of the Supplementary Information. Considering short documents ($w=1,000$ words), here the best results occurs only with the decision tree method combined with enriched networks. Differently from the GloVe approach, SVM does not yield the best results. Nonetheless, the highest accuracy across all classifiers and values of $p$ is the same. For larger documents ($w=5,000$ and $w=10,000$ words), no significant difference in performance between Word2vec and GloVe is apparent.
The results obtained for FastText are shown in Table TABREF18. In short texts, only kNN and Naive Bayes have their performance improved with virtual edges. However, none of the optimized results for these classifiers outperformed SVM applied to the traditional co-occurrence model. Conversely, when $w=5,000$ words, the optimized results are obtained with virtual edges in the SVM classifier. Apart from kNN, the enriched networks improved the traditional approach in all classifiers. For large chunks of texts ($w=10,000$), once again the approach based on SVM and virtual edges yielded optimized results. All classifiers benefited from the inclusion of additional edges. Remarkably, Naive Bayes improved by a margin of about $13\%$.
Results and Discussion ::: Effects of considering stopwords and local thresholding
While in the previous section we focused our analysis in the traditional word co-occurrence model, here we probe if the idea of considering virtual edges can also yield optimized results in particular modifications of the framework described in the methodology. The first modification in the co-occurrence model is the use of stopwords. While in semantical application of network language modeling stopwords are disregarded, in other application it can unravel interesting linguistic patterns BIBREF10. Here we analyzed the effect of using stopwords in enriched networks. We summarize the obtained results in Table TABREF20. We only show the results obtained with SVM, as it yielded the best results in comparison to other classifiers. The accuracy rate for other classifiers is shown in the Supplementary Information.
The results in Table TABREF20 reveals that even when stopwords are considered in the original model, an improvement can be observed with the addition of virtual edges. However, the results show that the degree of improvement depends upon the text length. In very short texts ($w=1,000$), none of the embeddings strategy was able to improve the performance of the classification. For $w=1,500$, a minor improvement was observed with FastText: the accuracy increased from $\Gamma _0 = 37.18\%$ to $38.46\%$. A larger improvement could be observed for $w=2,000$. Both Word2vec and FastText approaches allowed an increase of more than 5% in performance. A gain higher than 10% was observed for $w=2,500$ with Word2vec. For larger pieces of texts, the gain is less expressive or absent. All in all, the results show that the use of virtual edges can also benefit the network approach based on stopwords. However, no significant improvement could be observed with very short and very large documents. The comparison of all three embedding methods showed that no method performed better than the others in all cases.
We also investigated if more informed thresholding strategies could provide better results. While the simple global thresholding approach might not be able to represent more complex structures, we also tested a more robust approach based on the local approach proposed by Serrano et al. BIBREF42. In Table TABREF21, we summarize the results obtained with this thresholding strategies. The table shows $\max \Gamma _+^{(L)} / \max \Gamma _+^{(G)}$, where $\Gamma _+^{(L)}$ and $\Gamma _+^{(G)}$ are the accuracy obtained with the local and global thresholding strategy, respectively. The results were obtained with the SVM classifier, as it turned to be the most efficient classification method. We found that there is no gain in performance when the local strategy is used. In particular cases, the global strategy is considerably more efficient. This is the case e.g. when GloVe is employed in texts with $w=1,500$ words. The performance of the global strategy is $12.2\%$ higher than the one obtained with the global method. A minor difference in performance was found in texts comprising $w=1,000$ words, yet the global strategy is still more efficient than the global one.
To summarize all results obtained in this study we show in Table TABREF22 the best results obtained for each text length. We also show the relative gain in performance with the proposed approach and the embedding technique yielding the best result. All optimized results were obtained with the use of stopwords, global thresholding strategy and SVM as classification algorithm. A significant gain is more evident for intermediary text lengths.
Conclusion
Textual classification remains one of the most important facets of the Natural Language Processing area. Here we studied a family of classification methods, the word co-occurrence networks. Despite this apparent simplicity, this model has been useful in several practical and theoretical scenarios. We proposed a modification of the traditional model by establishing virtual edges to connect nodes that are semantically similar via word embeddings. The reasoning behind this strategy is the fact the similar words are not properly linked in the traditional model and, thus, important links might be overlooked if only adjacent words are linked.
Taking as reference task a stylometric problem, we showed – as a proof of principle – that the use of virtual edges might improve the discriminability of networks. When analyzing the best results for each text length, apart from very short and long texts, the proposed strategy yielded optimized results in all cases. The best classification performance was always obtained with the SVM classifier. In addition, we found an improved performance when stopwords are used in the construction of the enriched co-occurrence networks. Finally, a simple global thresholding strategy was found to be more efficient than a local approach that preserves the community structure of the networks. Because complex networks are usually combined with other strategies BIBREF8, BIBREF11, we believe that the proposed could be used in combination with other methods to improve the classification performance of other text classification tasks.
Our findings pave the way for research in several new directions. While we probed the effectiveness of virtual edges in a specific text classification task, this approach could be extended to general classification tasks. A systematic comparison of embedding techniques could also be performed to include other recent techniques BIBREF54, BIBREF55. We could also identify other relevant techniques to create virtual edges, thus allowing the use of the methodology in networked systems other than texts. For example, a network could be enriched with embeddings obtained from graph embedding techniques. A simpler approach could also consider link prediction BIBREF56 to create virtual edges. Finally, another interesting family of studies concerns the discrimination between co-occurrence and virtual edges, possibly by creating novel network measurements considering heterogeneous links.
Acknowledgments
The authors acknowledge financial support from FAPESP (Grant no. 16/19069-9), CNPq-Brazil (Grant no. 304026/2018-2). This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
Supplementary Information ::: Stopwords
The following words were considered as stopwords in our analysis: all, just, don't, being, over, both, through, yourselves, its, before, o, don, hadn, herself, ll, had, should, to, only, won, under, ours,has, should've, haven't, do, them, his, very, you've, they, not, during, now, him, nor, wasn't, d, did, didn, this, she, each, further, won't, where, mustn't, isn't, few, because, you'd, doing, some, hasn, hasn't, are, our, ourselves, out, what, for, needn't, below, re, does, shouldn't, above, between, mustn, t, be, we, who, mightn't, doesn't, were, here, shouldn, hers, aren't, by, on, about, couldn, of, wouldn't, against, s, isn, or, own, into, yourself, down, hadn't, mightn, couldn't, wasn, your, you're, from, her, their, aren, it's, there, been, whom, too, wouldn, themselves, weren, was, until, more, himself, that, didn't, but, that'll, with, than, those, he, me, myself, ma, weren't, these, up, will, while, ain, can, theirs, my, and, ve, then, is, am, it, doesn, an, as, itself, at, have, in, any, if, again, no, when, same, how, other, which, you, shan't, shan, needn, haven, after, most, such, why, a, off i, m, yours, you'll, so, y, she's, the, having, once.
Supplementary Information ::: List of books
The list of books is shown in Tables TABREF25 and TABREF26. For each book we show the respective authors (Aut.) and the following quantities: total number of words ($N_W$), total number of sentences ($N_S$), total number of paragraphs ($N_P$) and the average sentence length ($\langle S_L \rangle $), measured in number of words. The following authors were considered: Hector Hugh (HH), Thomas Hardy (TH), Daniel Defoe (DD), Allan Poe (AP), Bram Stoker (BS), Mark Twain (MT), Charles Dickens (CD), Pelham Grenville (PG), Charles Darwin (CD), Arthur Doyle (AD), George Eliot (GE), Jane Austen (JA), and Joseph Conrad (JC).
Supplementary Information ::: Additional results
In this section we show additional results obtained for different text length. More specifically, we show the results obtained for GloVe, Word2vec and FastText when stopwords are either considered in the text or disregarded from the analysis. | They use it as addition to previous model - they add new edge between words if word embeddings are similar. |
076928bebde4dffcb404be216846d9d680310622 | 076928bebde4dffcb404be216846d9d680310622_0 | Q: On what model architectures are previous co-occurence networks based?
Text: Introduction
The ability to construct complex and diverse linguistic structures is one of the main features that set us apart from all other species. Despite its ubiquity, some language aspects remain unknown. Topics such as language origin and evolution have been studied by researchers from diverse disciplines, including Linguistics, Computer Science, Physics and Mathematics BIBREF0, BIBREF1, BIBREF2. In order to better understand the underlying language mechanisms and universal linguistic properties, several models have been developed BIBREF3, BIBREF4. A particular language representation regards texts as complex systems BIBREF5. Written texts can be considered as complex networks (or graphs), where nodes could represent syllables, words, sentences, paragraphs or even larger chunks BIBREF5. In such models, network edges represent the proximity between nodes, e.g. the frequency of the co-occurrence of words. Several interesting results have been obtained from networked models, such as the explanation of Zipf's Law as a consequence of the least effort principle and theories on the nature of syntactical relationships BIBREF6, BIBREF7.
In a more practical scenario, text networks have been used in text classification tasks BIBREF8, BIBREF9, BIBREF10. The main advantage of the model is that it does not rely on deep semantic information to obtain competitive results. Another advantage of graph-based approaches is that, when combined with other approaches, they yield competitive results BIBREF11. A simple, yet recurrent text model is the well-known word co-occurrence network. After optional textual pre-processing steps, in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window. A common strategy connects only adjacent words in the so-called word adjacency networks.
While the co-occurrence representation yields good results in classification scenarios, some important features are not considered in the model. For example, long-range syntactical links, though less frequent than adjacent syntactical relationships, might be overlooked by a simple word adjacency approach BIBREF12. In addition, semantically similar words not sharing the same lemma are mapped into distinct nodes. In order to address these issues, here we introduce a modification of the traditional network representation by establishing additional edges, referred to as “virtual” edges. In the proposed model, in addition to the co-occurrence edges, we link two nodes (words) if the corresponding word embedding representations are similar. While this approach still does not merge similar nodes into the same concept, similar nodes are explicitly linked via virtual edges.
Our main objective here is to evaluate whether such an approach is able to improve the discriminability of word co-occurrence networks in a typical text network classification task. We evaluate the methodology for different embedding techniques, including GloVe, Word2Vec and FastText. We also investigated different thresholding strategies to establish virtual links. Our results revealed, as a proof of principle, that the proposed approach is able to improve the discriminability of the classification when compared to the traditional co-occurrence network. While the gain in performance depended upon the text length being considered, we found relevant gains for intermediate text lengths. Additional results also revealed that a simple thresholding strategy combined with the use of stopwords tends to yield the best results.
We believe that the proposed representation could be applied in other text classification tasks, which could lead to potential gains in performance. Because the inclusion of virtual edges is a simple technique to make the network denser, such an approach can benefit networked representations with a limited number of nodes and edges. This representation could also shed light on language mechanisms in theoretical studies relying on the representation of text as complex networks. Potential novel research lines leveraging the adopted approach to improve the characterization of texts in other applications are presented in the conclusion.
Related works
Complex networks have been used in a wide range of fields, including Social Sciences BIBREF13, Neuroscience BIBREF14, Biology BIBREF15, Scientometry BIBREF16 and Pattern Recognition BIBREF17, BIBREF18, BIBREF19, BIBREF20. In text analysis, networks are used to uncover language patterns, including the origins of the ever-present Zipf's Law BIBREF21 and the analysis of linguistic properties of natural and unknown texts BIBREF22, BIBREF23. Applications of network science in text mining and text classification encompass semantic analysis BIBREF24, BIBREF25, BIBREF26, BIBREF27, authorship attribution BIBREF28, BIBREF29 and stylometry BIBREF28, BIBREF30, BIBREF31. Here we focus on the stylometric analysis of texts using complex networks.
In BIBREF28, the authors used a co-occurrence network to study a corpus of English and Polish books. They considered a dataset of 48 novels, which were written by 8 different authors. Differently from traditional co-occurrence networks, some punctuation marks were considered as words when mapping texts as networks. The authors also decided to create a methodology to normalize the obtained network metrics, since they considered documents with variations in length. A similar approach was adopted in a similar study BIBREF32, with a focus on comparing novel measurements and measuring the effect of considering stopwords in the network structure.
A different approach to analyze co-occurrence networks was devised in BIBREF33. Whilst most approaches only considered traditional network measurements or devised novel topological and dynamical measurements, the authors combined networked and semantic information to improve the performance of network-based classification. Interestingly, the combined use of network motifs and node labels (representing the corresponding words) allowed an improvement in performance in the considered task. A similar combination of techniques using a hybrid approach was proposed in BIBREF8. Network-based approaches have also been applied to authorship recognition tasks in other languages, including Persian texts BIBREF9.
Co-occurrence networks have been used in contexts other than stylometric analysis. The main advantage of this approach is illustrated in the task aimed at diagnosing diseases via text analysis BIBREF11. Because the topological analysis of co-occurrence language networks does not require deep semantic analysis, this model is able to represent text created by patients suffering from cognitive impairment BIBREF11. Recently, it has been shown that the combination of network and traditional features could be used to improve the diagnosis of patients with cognitive impairment BIBREF11. Interestingly, this was one of the first approaches suggesting the use of embeddings to address the particular problem of the lack of statistics to create a co-occurrence network in short documents BIBREF34.
While many of the works dealing with word co-occurrence networks have been proposed in the last few years, no systematic study of the effects of including information from word embeddings in such networks has been conducted. This work studies how links created via embedding information modify the underlying structure of networks and, most importantly, how they can improve the model so as to provide better classification performance in the stylometry task.
Material and Methods
To represent texts as networks, we used the so-called word adjacency network representation BIBREF35, BIBREF28, BIBREF32. Typically, before creating the networks, the text is pre-processed. An optional pre-processing step is the removal of stopwords. This step is optional because such words include mostly articles and prepositions, which may be naturally represented by network edges. However, in some applications – including the authorship attribution task – stopwords (or function words) play an important role in the stylistic characterization of texts BIBREF32. A list of stopwords considered in this study is available in the Supplementary Information.
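A minimal sketch of this construction step is given below (this is not the authors' code); it assumes a tokenized text and uses networkx, with STOPWORDS standing for the list given in the Supplementary Information.

```python
import networkx as nx

# Placeholder subset; the full list is in the Supplementary Information.
STOPWORDS = {"the", "a", "of", "and", "to", "in"}

def adjacency_network(tokens, remove_stopwords=True):
    """Build a word adjacency network: link each word to the next one."""
    if remove_stopwords:
        tokens = [t for t in tokens if t.lower() not in STOPWORDS]
    g = nx.Graph()
    g.add_nodes_from(set(tokens))
    for w1, w2 in zip(tokens, tokens[1:]):
        if w1 != w2:
            g.add_edge(w1, w2, kind="co-occurrence")
    return g

g = adjacency_network("the quick brown fox jumps over the lazy dog".split())
```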
The pre-processing step may also include a lemmatization procedure. This step aims at mapping words conveying the same meaning into the same node. In the lemmatization process, nouns and verbs are mapped into their singular and infinitive forms. Note that, while this step is useful to merge words sharing a lemma into the same node, more complex semantic relationships are overlooked. For example, if “car” and “vehicle” co-occur in the same text, they are considered as distinct nodes, which may result in an inaccurate representation of the text.
Such a drawback is addressed by including “virtual” edges connecting nodes. In other words, even if two words are not adjacent in the text, we include “virtual” edges to indicate that two distant words are semantically related. The inclusion of such virtual edges is illustrated in Figure FIGREF1. In order to measure the semantic similarity between two concepts, we use the concept of word embeddings BIBREF36, BIBREF37. Thus, each word is represented using a vector representation encoding the semantic and contextual characteristics of the word. Several interesting properties have been obtained from distributed representations of words. One particular property encoded in the embedding representation is the fact that the semantic similarity between concepts is proportional to the similarity of the vectors representing the words. Similarly to several other works, here we measure the similarity of the vectors via cosine similarity BIBREF38.
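The sketch below illustrates how such virtual edges could be added to a network built as above; `vectors` is an assumed lookup from words to their pre-trained embeddings (e.g. GloVe, Word2vec or FastText), and the cosine similarity is stored as the edge weight.

```python
import numpy as np
from itertools import combinations

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def add_virtual_edges(g, vectors):
    """Connect non-adjacent words whose embeddings are similar."""
    for w1, w2 in combinations(list(g.nodes()), 2):
        if g.has_edge(w1, w2) or w1 not in vectors or w2 not in vectors:
            continue
        g.add_edge(w1, w2, kind="virtual",
                   weight=cosine(vectors[w1], vectors[w2]))
    return g
```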
The following strategies to create word embedding were considered in this paper:
GloVe: the Global Vectors (GloVe) algorithm is an extension of the Word2vec model BIBREF39 for efficient word vector learning BIBREF40. This approach combines global statistics from matrix factorization techniques (such as latent semantic analysis) with context-based and predictive methods like Word2Vec. The method is called Global Vectors because the global corpus statistics are captured by GloVe. Instead of using a window to define the local context, GloVe constructs an explicit word-context matrix (or co-occurrence matrix) using statistics across the entire corpus. The final result is a learning model that oftentimes yields better word vector representations BIBREF40.
Word2Vec: this is a predictive model that finds dense vector representations of words using a three-layer neural network with a single hidden layer BIBREF39. It comes in two variants: the continuous bag-of-words and the skip-gram model. In the latter, the model analyzes the words of a set of sentences (or corpus) and attempts to predict the neighbors of such words. For example, taking as reference the word “Robin”, the model decides that “Hood” is more likely to follow the reference word than any other word. The vectors are obtained as follows: given the vocabulary (generated from all corpus words), the model trains a neural network with the sentences of the corpus. Then, for a given word, the probabilities that each word follows the reference word are obtained. Once the neural network is trained, the weights of the hidden layer are used as the vector of each corpus word.
FastText: this method is another extension of the Word2Vec model BIBREF41. Unlike Word2Vec, FastText represents each word as a bag of character n-grams. Therefore, the neural network not only trains individual words, but also several n-grams of such words. The vector for a word is the sum of vectors obtained for the character n-grams composing the word. For example, the embedding obtained for the word “computer” with $n\le 3$ is the sum of the embeddings obtained for “co”, “com”, “omp”, “mpu”, “put”, “ute”, “ter” and “er”. In this way, this method obtains improved representations for rare words, since n-grams composing rare words might be present in other words. The FastText representation also allows the model to understand suffixes and prefixes. Another advantage of FastText is its efficiency to be trained in very large corpora.
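As a rough illustration of the n-gram idea (not the actual FastText implementation, which also adds word-boundary symbols), the snippet below extracts character n-grams and sums their vectors; `ngram_vectors` is an assumed lookup of n-gram embeddings.

```python
import numpy as np

def char_ngrams(word, n_min=2, n_max=3):
    """All character n-grams of the word for n_min <= n <= n_max."""
    return [word[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(word) - n + 1)]

def ngram_word_vector(word, ngram_vectors, dim=100):
    """Sum the vectors of the word's character n-grams (FastText-like)."""
    grams = [g for g in char_ngrams(word) if g in ngram_vectors]
    return sum((ngram_vectors[g] for g in grams), np.zeros(dim))

print(char_ngrams("computer"))  # ['co', 'om', ..., 'com', 'omp', ..., 'ter']
```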
Concerning the thresholding process, we considered two main strategies. First, we used a global strategy: in addition to the co-occurrence links (continuous lines in Figure FIGREF1), only “virtual” edges stronger than a given threshold are left in the network. Thus only the most similar concepts are connected via virtual links. This strategy is hereafter referred to as global strategy. Unfortunately, this method may introduce an undesired bias towards hubs BIBREF42.
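A sketch of this global filtering step is shown below, assuming the network was enriched as in the previous snippets, with virtual edges tagged by `kind="virtual"`.

```python
def global_filter(g, threshold):
    """Keep co-occurrence edges; drop virtual edges below a global threshold."""
    weak = [(u, v) for u, v, d in g.edges(data=True)
            if d.get("kind") == "virtual" and d.get("weight", 0.0) < threshold]
    g.remove_edges_from(weak)
    return g
```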
To overcome the potential disadvantages of the global thresholding method, we also considered a more refined thresholding approach that takes into account the local structure to decide whether a weighted link is statistically significant BIBREF42. This method relies on the idea that the importance of an edge should be considered in the context in which it appears. In other words, the relevance of an edge should be evaluated by analyzing the nodes connected to its ending points. Using the concept of disparity filter, the method devised in BIBREF42 defines a null model that quantifies the probability of a node to be connected to an edge with a given weight, based on its other connections. This probability is used to define the significance of the edge. The parameter that is used to measure the significance of an edge $e_{ij}$ is $\alpha _{ij}$, defined as:
where $w_{ij}$ is the weight of the edge $e_{ij}$ and $k_i$ is the degree of the $i$-th node. The obtained network corresponds to the set of nodes and edges obtained by removing all edges with $\alpha $ higher than the considered threshold. Note that while the similarity between co-occurrence links might be considered to compute $\alpha _{ij}$, only “virtual” edges (i.e. the dashed lines in Figure FIGREF1) are eligible to be removed from the network in the filtering step. This strategy is hereafter referred to as local strategy.
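The snippet below sketches this local filtering step. Since the equation itself is not reproduced in the text above, the code assumes the standard closed form of the disparity filter of Serrano et al., $\alpha _{ij} = (1 - w_{ij}/s_i)^{k_i - 1}$, where $s_i$ is the strength of node $i$; this exact expression is an assumption, not a quotation of the authors' formula.

```python
def disparity_alpha(g, node, weight):
    """Assumed closed form of the disparity filter: (1 - w/s_i)^(k_i - 1)."""
    k = g.degree(node)
    if k <= 1:
        return 0.0  # a single edge is always kept
    strength = sum(d.get("weight", 1.0) for _, _, d in g.edges(node, data=True))
    return (1.0 - weight / strength) ** (k - 1)

def local_filter(g, alpha_max):
    """Drop virtual edges that are not significant for either endpoint."""
    weak = []
    for u, v, d in g.edges(data=True):
        if d.get("kind") != "virtual":
            continue  # only virtual edges are eligible for removal
        w = d.get("weight", 1.0)
        if min(disparity_alpha(g, u, w), disparity_alpha(g, v, w)) > alpha_max:
            weak.append((u, v))
    g.remove_edges_from(weak)
    return g
```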
After co-occurrence networks are created and virtual edges are included, in the next step we used a characterization based on topological analysis. Because a global topological analysis is prone to variations in network size, we focused our analysis on the local characterization of complex networks. In a local topological analysis, we use as features the values of topological/dynamical measurements obtained for a set of words. In this case, we selected as features the words occurring in all books of the dataset. For each word, we considered the following network measurements: degree, betweenness, clustering coefficient, average shortest path length, PageRank, concentric symmetry (at the second and third hierarchical level) BIBREF32 and accessibility BIBREF43, BIBREF44 (at the second and third hierarchical level). We chose these measurements because all of them capture some particular linguistic feature of texts BIBREF45, BIBREF46, BIBREF47, BIBREF48. After network measurements are extracted, they are used in machine learning algorithms. In our experiments, we considered Decision Trees (DT), nearest neighbors (kNN), Naive Bayes (NB) and Support Vector Machines (SVM). We used some heuristics to optimize classifier parameters. Such techniques are described in the literature BIBREF49. The accuracy of the pattern recognition methods was evaluated using cross-validation BIBREF50.
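A simplified sketch of the feature extraction and classification steps is given below; the concentric symmetry and accessibility measurements are omitted (they are not available in networkx), and `build_network` and `shared_words` are assumed helpers/variables standing for the network construction above and the words common to all books.

```python
import numpy as np
import networkx as nx
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def node_features(g, words):
    """Local measurements for the selected words (a subset of those used here)."""
    btw = nx.betweenness_centrality(g)
    clu = nx.clustering(g)
    pr = nx.pagerank(g)
    feats = []
    for w in words:
        sp = nx.single_source_shortest_path_length(g, w)
        feats += [g.degree(w), btw[w], clu[w], pr[w], np.mean(list(sp.values()))]
    return feats

# X = [node_features(build_network(book), shared_words) for book in books]
# scores = cross_val_score(SVC(), X, y, cv=10)  # cross-validated accuracy
```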
In summary, the methodology used in this paper encompasses the following steps:
Network construction: here texts are mapped into co-occurrence networks. Some variations exist in the literature; however, here we focused on the most usual variation, i.e. the possibility of considering or disregarding stopwords. A network with co-occurrence links is obtained after this step.
Network enrichment: in this step, the network is enriched with virtual edges established via similarity of word embeddings. After this step, we are given a complete network with weighted links. Virtually, any embedding technique could be used to gauge the similarity between nodes.
Network filtering: in order to eliminate spurious links included in the last step, the weakest edges are filtered. Two approaches were considered: a simple approach based on a global threshold and a local thresholding strategy that preserves network community structure. The outcome of this network filtering step is a network with two types of links: co-occurrence and virtual links (as shown in Figure FIGREF1).
Feature extraction: In this step, topological and dynamical network features are extracted. Here, we do not discriminate co-occurrence from virtual edges to compute the network metrics.
Pattern classification: once features are extracted from complex networks, they are used in pattern classification methods. This might include supervised, unsupervised and semi-supervised classification. This framework is exemplified in the supervised scenario.
The above framework is exemplified with the most common technique(s). It should be noted that the methods used, however, can be replaced by similar techniques. For example, the network construction could consider stopwords or even punctuation marks BIBREF51. Another possibility is the use of different strategies of thresholding. While a systematic analysis of techniques and parameters is still required to reveal other potential advantages of the framework based on the addition of virtual edges, in this paper we provide a first analysis showing that virtual edges could be useful to improve the discriminability of texts modeled as complex networks.
Here we used a dataset compatible with datasets used recently in the literature (see e.g. BIBREF28, BIBREF10, BIBREF52). The objective of the studied stylometric task is to identify the authorship of an unknown document BIBREF53. All data and some statistics of each book are shown in the Supplementary Information.
Results and Discussion
In Section SECREF13, we probe whether the inclusion of virtual edges is able to improve the performance of the traditional co-occurrence network-based classification in a usual stylometry task. While the focus of this paper is not to perform a systematic analysis of different methods comprising the adopted network, we consider two variations in the adopted methodology. In Section SECREF19, we consider the use of stopwords and the adoption of a local thresholding process to establish different criteria to create new virtual edges.
Results and Discussion ::: Performance analysis
In Figure FIGREF14, we show some of the improvements in performance obtained when including a fixed amount of virtual edges using GloVe as embedding method. In each subpanel, we show the relative improvement in performance obtained as a function of the fraction of additional edges. In this section, we considered the traditional co-occurrence as starting point. In other words, the network construction disregarded stopwords. The list of stopwords considered in this paper is available in the Supplementary Information. We also considered the global approach to filter edges.
The relative improvement in performance is given by $\Gamma _+{(p)}/\Gamma _0$, where $\Gamma _+{(p)}$ is the accuracy rate obtained when $p\%$ additional edges are included and $\Gamma _0 = \Gamma _+{(p=0)}$, i.e. $\Gamma _0$ is the accuracy rate measured from the traditional co-occurrence model. We only show the highest relative improvements in performance for each classifier. In our analysis, we considered also samples of text with distinct length, since the performance of network-based methods is sensitive to text length BIBREF34. In this figure, we considered samples comprising $w=\lbrace 1.0, 2.5, 5.0, 10.0\rbrace $ thousand words.
The results obtained for GloVe show that the highest relative improvements in performance occur for decision trees. This is apparent especially for the shortest samples. For $w=1,000$ words, the decision tree accuracy is enhanced by a factor of almost 50% when $p=20\%$. An excellent gain in performance is also observed for both Naive Bayes and SVM classifiers, when $p=18\%$ and $p=12\%$, respectively. When $w=2,500$ words, the highest improvement was observed for the decision tree algorithm. A minor improvement was observed for the kNN method. A similar behavior occurred for $w=5,000$ words. Interestingly, SVM seems to benefit from the use of additional edges when larger documents are considered. When only 5% virtual edges are included, the relative gain in performance is about 45%.
The relative gain in performance obtained for Word2vec is shown in Figure FIGREF15. Overall, once again decision trees obtained the highest gain in performance when short texts are considered. Similarly to the analysis based on the GloVe method, the gain for kNN is low when compared to the benefit received by other methods. Here, a considerable gain for SVM is only clear for $w=2,500$ and $p=10\%$. When large texts are considered, Naive Bayes obtained the largest gain in performance.
Finally, the relative gain in performance obtained for FastText is shown in Figure FIGREF16. The prominent role of virtual edges in the decision tree algorithm for the classification of short texts is once again evident. Conversely, the classification of large documents using virtual edges mostly benefits the Naive Bayes classifier. Similarly to the results observed for GloVe and Word2vec, the gain in performance obtained for kNN is low when compared to other methods.
While Figures FIGREF14 – FIGREF16 show the relative behavior of the accuracy, it is still interesting to observe the absolute accuracy rates obtained with the classifiers. In Table TABREF17, we show the best accuracy rate (i.e. $\max \Gamma _+ = \max _p \Gamma _+(p)$) for GloVe. We also show the average difference in performance ($\langle \Gamma _+ - \Gamma _0 \rangle $) and the total number of cases in which an improvement in performance was observed ($N_+$). $N_+$ ranges in the interval $0 \le N_+ \le 20$. Table TABREF17 summarizes the results obtained for $w = \lbrace 1.0, 5.0, 10.0\rbrace $ thousand words. Additional results for other text lengths are available in Tables TABREF28–TABREF30 of the Supplementary Information.
In very short texts, despite the low accuracy rates, an improvement can be observed for all classifiers. The best result was obtained with SVM when virtual edges were included. For $w=5,000$ words, the inclusion of new edges has no positive effect on either the kNN or the Naive Bayes algorithm. On the other hand, once again SVM could be improved, yielding an optimized performance. For $w=10,000$ words, SVM could not be improved. However, even without improvement it yielded the maximum accuracy rate. The Naive Bayes algorithm, on average, could be improved by a margin of about 10%.
The results obtained for Word2vec are summarized in Table TABREF29 of the Supplementary Information. Considering short documents ($w=1,000$ words), here the best results occur only with the decision tree method combined with enriched networks. Differently from the GloVe approach, SVM does not yield the best results. Nonetheless, the highest accuracy across all classifiers and values of $p$ is the same. For larger documents ($w=5,000$ and $w=10,000$ words), no significant difference in performance between Word2vec and GloVe is apparent.
The results obtained for FastText are shown in Table TABREF18. In short texts, only kNN and Naive Bayes have their performance improved with virtual edges. However, none of the optimized results for these classifiers outperformed SVM applied to the traditional co-occurrence model. Conversely, when $w=5,000$ words, the optimized results are obtained with virtual edges in the SVM classifier. Apart from kNN, the enriched networks improved the traditional approach in all classifiers. For large chunks of texts ($w=10,000$), once again the approach based on SVM and virtual edges yielded optimized results. All classifiers benefited from the inclusion of additional edges. Remarkably, Naive Bayes improved by a margin of about $13\%$.
Results and Discussion ::: Effects of considering stopwords and local thresholding
While in the previous section we focused our analysis on the traditional word co-occurrence model, here we probe whether the idea of considering virtual edges can also yield optimized results in particular modifications of the framework described in the methodology. The first modification in the co-occurrence model is the use of stopwords. While stopwords are disregarded in semantic applications of network language modeling, in other applications they can unravel interesting linguistic patterns BIBREF10. Here we analyzed the effect of using stopwords in enriched networks. We summarize the obtained results in Table TABREF20. We only show the results obtained with SVM, as it yielded the best results in comparison to other classifiers. The accuracy rates for other classifiers are shown in the Supplementary Information.
The results in Table TABREF20 reveal that, even when stopwords are considered in the original model, an improvement can be observed with the addition of virtual edges. However, the results show that the degree of improvement depends upon the text length. In very short texts ($w=1,000$), none of the embedding strategies was able to improve the performance of the classification. For $w=1,500$, a minor improvement was observed with FastText: the accuracy increased from $\Gamma _0 = 37.18\%$ to $38.46\%$. A larger improvement could be observed for $w=2,000$: both the Word2vec and FastText approaches allowed an increase of more than 5% in performance. A gain higher than 10% was observed for $w=2,500$ with Word2vec. For larger pieces of text, the gain is less pronounced or absent. All in all, the results show that the use of virtual edges can also benefit the network approach based on stopwords. However, no significant improvement could be observed for very short and very long documents. The comparison of all three embedding methods showed that no method performed better than the others in all cases.
We also investigated whether more informed thresholding strategies could provide better results. While the simple global thresholding approach might not be able to represent more complex structures, we also tested a more robust, local approach proposed by Serrano et al. BIBREF42. In Table TABREF21, we summarize the results obtained with this thresholding strategy. The table shows $\max \Gamma _+^{(L)} / \max \Gamma _+^{(G)}$, where $\Gamma _+^{(L)}$ and $\Gamma _+^{(G)}$ are the accuracies obtained with the local and global thresholding strategies, respectively. The results were obtained with the SVM classifier, as it turned out to be the most efficient classification method. We found that there is no gain in performance when the local strategy is used. In particular cases, the global strategy is considerably more efficient. This is the case, e.g., when GloVe is employed in texts with $w=1,500$ words: the performance of the global strategy is $12.2\%$ higher than the one obtained with the local method. A minor difference in performance was found in texts comprising $w=1,000$ words, yet the global strategy is still more efficient than the local one.
To summarize all results obtained in this study, we show in Table TABREF22 the best results obtained for each text length. We also show the relative gain in performance with the proposed approach and the embedding technique yielding the best result. All optimized results were obtained with the use of stopwords, the global thresholding strategy and SVM as classification algorithm. A significant gain is more evident for intermediate text lengths.
Conclusion
Textual classification remains one of the most important facets of the Natural Language Processing area. Here we studied a family of classification methods, the word co-occurrence networks. Despite its apparent simplicity, this model has been useful in several practical and theoretical scenarios. We proposed a modification of the traditional model by establishing virtual edges to connect nodes that are semantically similar via word embeddings. The reasoning behind this strategy is the fact that semantically similar words are not properly linked in the traditional model and, thus, important links might be overlooked if only adjacent words are linked.
Taking a stylometric problem as reference task, we showed – as a proof of principle – that the use of virtual edges might improve the discriminability of networks. When analyzing the best results for each text length, apart from very short and long texts, the proposed strategy yielded optimized results in all cases. The best classification performance was always obtained with the SVM classifier. In addition, we found an improved performance when stopwords are used in the construction of the enriched co-occurrence networks. Finally, a simple global thresholding strategy was found to be more efficient than a local approach that preserves the community structure of the networks. Because complex networks are usually combined with other strategies BIBREF8, BIBREF11, we believe that the proposed approach could be used in combination with other methods to improve the performance in other text classification tasks.
Our findings pave the way for research in several new directions. While we probed the effectiveness of virtual edges in a specific text classification task, this approach could be extended to general classification tasks. A systematic comparison of embedding techniques could also be performed to include other recent techniques BIBREF54, BIBREF55. We could also identify other relevant techniques to create virtual edges, thus allowing the use of the methodology in networked systems other than texts. For example, a network could be enriched with embeddings obtained from graph embedding techniques. A simpler approach could also consider link prediction BIBREF56 to create virtual edges. Finally, another interesting family of studies concerns the discrimination between co-occurrence and virtual edges, possibly by creating novel network measurements considering heterogeneous links.
Acknowledgments
The authors acknowledge financial support from FAPESP (Grant no. 16/19069-9), CNPq-Brazil (Grant no. 304026/2018-2). This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
Supplementary Information ::: Stopwords
The following words were considered as stopwords in our analysis: all, just, don't, being, over, both, through, yourselves, its, before, o, don, hadn, herself, ll, had, should, to, only, won, under, ours,has, should've, haven't, do, them, his, very, you've, they, not, during, now, him, nor, wasn't, d, did, didn, this, she, each, further, won't, where, mustn't, isn't, few, because, you'd, doing, some, hasn, hasn't, are, our, ourselves, out, what, for, needn't, below, re, does, shouldn't, above, between, mustn, t, be, we, who, mightn't, doesn't, were, here, shouldn, hers, aren't, by, on, about, couldn, of, wouldn't, against, s, isn, or, own, into, yourself, down, hadn't, mightn, couldn't, wasn, your, you're, from, her, their, aren, it's, there, been, whom, too, wouldn, themselves, weren, was, until, more, himself, that, didn't, but, that'll, with, than, those, he, me, myself, ma, weren't, these, up, will, while, ain, can, theirs, my, and, ve, then, is, am, it, doesn, an, as, itself, at, have, in, any, if, again, no, when, same, how, other, which, you, shan't, shan, needn, haven, after, most, such, why, a, off i, m, yours, you'll, so, y, she's, the, having, once.
Supplementary Information ::: List of books
The list of books is shown in Tables TABREF25 and TABREF26. For each book we show the respective authors (Aut.) and the following quantities: total number of words ($N_W$), total number of sentences ($N_S$), total number of paragraphs ($N_P$) and the average sentence length ($\langle S_L \rangle $), measured in number of words. The following authors were considered: Hector Hugh (HH), Thomas Hardy (TH), Daniel Defoe (DD), Allan Poe (AP), Bram Stoker (BS), Mark Twain (MT), Charles Dickens (CD), Pelham Grenville (PG), Charles Darwin (CD), Arthur Doyle (AD), George Eliot (GE), Jane Austen (JA), and Joseph Conrad (JC).
Supplementary Information ::: Additional results
In this section we show additional results obtained for different text length. More specifically, we show the results obtained for GloVe, Word2vec and FastText when stopwords are either considered in the text or disregarded from the analysis. | in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window, connects only adjacent words in the so called word adjacency networks |
f33236ebd6f5a9ccb9b9dbf05ac17c3724f93f91 | f33236ebd6f5a9ccb9b9dbf05ac17c3724f93f91_0 | Q: Is model explanation output evaluated, what metric was used?
Text: Introduction
Inspired by textual entailment BIBREF0, Xie BIBREF1 introduced the visual-textual entailment (VTE) task, which considers semantic entailment between a premise image and a textual hypothesis. Semantic entailment consists in determining if the hypothesis can be concluded from the premise, and assigning to each pair of (premise image, textual hypothesis) a label among entailment, neutral, and contradiction. In Figure FIGREF3, the label for the first image-sentence pair is entailment, because the hypothesis states that “a bunch of people display different flags”, which can be clearly derived from the image. On the contrary, the second image-sentence pair is labelled as contradiction, because the hypothesis stating that “people [are] running a marathon” contradicts the image with static people.
Xie also propose the SNLI-VE dataset as the first dataset for VTE. SNLI-VE is built from the textual entailment SNLI dataset BIBREF0 by replacing textual premises with the Flickr30k images that they originally described BIBREF2. However, images contain more information than their descriptions, which may entail or contradict the textual hypotheses (see Figure FIGREF3). As a result, the neutral class in SNLI-VE has substantial labelling errors. Vu BIBREF3 estimated ${\sim }31\%$ errors in this class, and ${\sim }1\%$ for the contradiction and entailment classes.
Xie BIBREF1 introduced the VTE task under the name of “visual entailment”, which could imply recognizing entailment between images only. This paper prefers to follow Suzuki BIBREF4 and call it “visual-textual entailment” instead, as it involves reasoning on image-sentence pairs.
In this work, we first focus on decreasing the error in the neutral class by collecting new labels for the neutral pairs in the validation and test sets of SNLI-VE, using Amazon Mechanical Turk (MTurk). To ensure high quality annotations, we used a series of quality control measures, such as in-browser checks, inserting trusted examples, and collecting three annotations per instance. Secondly, we re-evaluate current image-text understanding systems, such as the bottom-up top-down attention network (BUTD) BIBREF5 on VTE using our corrected dataset, which we call SNLI-VE-2.0.
Thirdly, we introduce the e-SNLI-VE-2.0 corpus, which we form by appending human-written natural language explanations to SNLI-VE-2.0. These explanations were collected in e-SNLI BIBREF6 to support textual entailment for SNLI. For the same reasons as above, we re-annotate the explanations for the neutral pairs in the validation and test sets, while keeping the explanations from e-SNLI for all the rest. Finally, we extend a current VTE model with the capacity of learning from these explanations at training time and outputting an explanation for each predicted label at testing time.
SNLI-VE-2.0
The goal of VTE is to determine if a textual hypothesis $H_{text}$ can be concluded, given the information in a premise image $P_{image}$ BIBREF1. There are three possible labels:
Entailment: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is true.
Contradiction: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is false.
Neutral: if neither of the earlier two are true.
The SNLI-VE dataset proposed by Xie BIBREF1 is the combination of Flickr30k, a popular image dataset for image captioning BIBREF2 and SNLI, an influential dataset for natural language inference BIBREF0. Textual premises from SNLI are replaced with images from Flickr30k, which is possible, as these premises were originally collected as captions of these images (see Figure FIGREF3).
However, in practice, a sizeable proportion of labels is wrong due to the additional information contained in images. This mostly affects neutral pairs, since images may contain the necessary information to ground a hypothesis for which a simple premise caption was not sufficient. An example is shown in Figure FIGREF3. Vu BIBREF3 report that the label is wrong for ${\sim }31\%$ of neutral examples, based on a random subset of 171 neutral points from the test set. We also annotated 150 random neutral examples from the test set and found a similar percentage of 30.6% errors.
Our annotations are available at https://github.com/virginie-do/e-SNLI-VE/tree/master/annotations/gt_labels.csv
SNLI-VE-2.0 ::: Re-annotation details
In this work, we only collect new labels for the neutral pairs in the validation and test sets of SNLI-VE. While the procedure of re-annotation is generic, we limit our re-annotation to these splits as a first step to verify the difference in performance that current models have when evaluated on the corrected test set as well as the effect of model selection on the corrected validation set. We leave for future work re-annotation of the training set, which would likely lead to training better VTE models. We also chose not to re-annotate entailment and contradiction classes, as their error rates are much lower ($<$1% as reported by Vu BIBREF3).
The main question that we want our dataset to answer is: “What is the relationship between the image premise and the sentence hypothesis?”. We provide workers with the definitions of entailment, neutral, and contradiction for image-sentence pairs and one example for each label. As shown in Figure FIGREF8, for each image-sentence pair, workers are required to (a) choose a label, (b) highlight words in the sentence that led to their decision, and (c) explain their decision in a comprehensive and concise manner, using at least half of the words that they highlighted. The collected explanations will be presented in more detail in Section SECREF20, as we focus here on the label correction. We point out that it is likely that requiring an explanation at the same time as requiring a label has a positive effect on the correctness of the label, since having to justify the picked label in writing may make workers pay increased attention. Moreover, we implemented additional quality control measures for crowdsourced annotations, such as (a) collecting three annotations for every input, (b) injecting trusted annotations into the task for verification BIBREF7, and (c) restricting to workers with at least 90% previous approval rate.
First, we noticed that some instances in SNLI-VE are ambiguous. We show some examples in Figure FIGREF3 and in Appendix SECREF43. In order to have a better sense of this ambiguity, three authors of this paper independently annotated 100 random examples. All three authors agreed on 54% of the examples, exactly two authors agreed on 45%, and there was only one example on which all three authors disagreed. We identified the following three major sources of ambiguity:
mapping an emotion in the hypothesis to a facial expression in the image premise, e.g., “people enjoy talking”, “angry people”, “sad woman”. Even when the face is seen, it may be subjective to infer an emotion from a static image (see Figure FIGREF44 in Appendix SECREF43).
personal taste, e.g., “the sign is ugly”.
lack of consensus on terms such as “many people” or “crowded”.
To account for the ambiguity that the neutral labels seem to present, we considered that an image-sentence pair is too ambiguous and not suitable for a well-defined visual-textual entailment task when three different labels were assigned by the three workers. Hence, we removed these examples from the validation (5.2%) and test (5.5%) sets.
To ensure that our workers are correctly performing the task, we randomly inserted trusted pairs, i.e., pairs among the 54% on which all three authors agreed on the label. For each set of 10 pairs presented to a worker, one trusted pair was introduced at a random location, so that the worker, while being told that there is such a test pair, cannot figure out which one it is. Via an in-browser check, we only allow workers to submit their answers for each set of 10 instances only if the trusted pair was correctly labelled. Other in-browser checks were done for the collection of explanations, as we will describe in Section SECREF20. More details about the participants and design of the Mechanical Turk task can be found in Appendix SECREF41.
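The snippet below sketches one way such trusted pairs could be injected at a random position within each batch of ten instances; it is only an illustration of the mechanism, not the actual MTurk task code.

```python
import random

def build_batch(pairs, trusted_pool, batch_size=10):
    """One trusted pair at a random position among each set of ten instances."""
    batch = random.sample(pairs, batch_size - 1)
    batch.insert(random.randrange(batch_size), random.choice(trusted_pool))
    return batch
```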
After collecting new labels for the neutral instances in the validation and testing sets, we randomly select and annotate 150 instances from the validation set that were neutral in SNLI-VE. Based on this sample, the error rate went down from 31% to 12% in SNLI-VE-2.0. Looking at the 18 instances where we disagreed with the label assigned by MTurk workers, we noticed that 12 were due to ambiguity in the examples, and 6 were due to workers' errors. Further investigation into potentially eliminating ambiguous instances would likely be beneficial. However, we leave it as future work, and we proceed in this work with using our corrected labels, since our error rate is significantly lower than that of the original SNLI-VE.
Finally, we note that only about 62% of the originally neutral pairs remain neutral, while 21% become contradiction and 17% entailment pairs. Therefore, we are now facing an imbalance between the neutral, entailment, and contradiction instances in the validation and testing sets of SNLI-VE-2.0. The neutral class becomes underrepresented and the label distributions in the corrected validation and testing sets both become E / N / C: 39% / 20% / 41%. To account for this, we compute the balanced accuracy, i.e., the average of the three accuracies on each class.
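For reference, the balanced accuracy can be computed as the unweighted mean of the per-class accuracies, as in the sketch below (equivalently, scikit-learn's balanced_accuracy_score).

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Average of the per-class accuracies (E / N / C)."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    return sum(correct[c] / total[c] for c in total) / len(total)
```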
SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment
Since we decreased the error rate of labels in the validation and test set, we are interested in the performance of a VTE model when using the corrected sets.
SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment ::: Model.
To tackle SNLI-VE, Xie BIBREF1 used EVE (for “Explainable Visual Entailment”), a modified version of the BUTD architecture, the winner of the Visual Question Answering (VQA) challenge in 2017 BIBREF5. Since the EVE implementation is not available at the time of this work, we used the original BUTD architecture, with the same hyperparameters as reported in BIBREF1.
BUTD contains an image processing module and a text processing module. The image processing module encodes each image region proposed by FasterRCNN BIBREF8 into a feature vector using a bottom-up attention mechanism. In the text processing module, the text hypothesis is encoded into a fixed-length vector, which is the last output of a recurrent neural network with 512-GRU units BIBREF9. To input each token into the recurrent network, we use the pretrained GloVe vectors BIBREF10. Finally, a top-down attention mechanism is used between the hypothesis vector and each of the image region vectors to obtain an attention weight for each region. The weighted sum of these image region vectors is then fused with the text hypothesis vector. The multimodal fusion is fed to a multilayer perceptron (MLP) with tanh activations and a final softmax layer to classify the image-sentence relation as entailment, contradiction, or neutral.
Using the implementation from https://github.com/claudiogreco/coling18-gte.
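The following PyTorch sketch illustrates the overall BUTD-style architecture described above; it is a simplified re-implementation for illustration, not the code from the repository above, and the region feature dimension (2048) is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ButdVTE(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hid=512, region_dim=2048, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)   # initialised with GloVe in practice
        self.gru = nn.GRU(emb_dim, hid, batch_first=True)
        self.att = nn.Linear(hid + region_dim, 1)      # top-down attention scores
        self.proj = nn.Linear(region_dim, hid)
        self.mlp = nn.Sequential(nn.Linear(hid, hid), nn.Tanh(),
                                 nn.Linear(hid, n_classes))

    def forward(self, tokens, regions):                # regions: (B, R, region_dim)
        _, h = self.gru(self.emb(tokens))
        h = h.squeeze(0)                               # hypothesis vector, (B, hid)
        q = h.unsqueeze(1).expand(-1, regions.size(1), -1)
        a = F.softmax(self.att(torch.cat([q, regions], dim=-1)), dim=1)
        v = self.proj((a * regions).sum(dim=1))        # attended image vector
        return self.mlp(h * v)                         # logits over E / N / C
```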
We use the original training set from SNLI-VE. To see the impact of correcting the validation and test sets, we do the following three experiments:
model selection as well as testing are done on the original uncorrected SNLI-VE.
model selection is done on the uncorrected SNLI-VE validation set, while testing is done on the corrected SNLI-VE-2.0 test set.
model selection as well as testing are done on the corrected SNLI-VE-2.0.
Models are trained with cross-entropy loss optimized by the Adam optimizer BIBREF11 with batch size 64. The maximum number of training epochs is set to 100, with early stopping when no improvement is observed on validation accuracy for 3 epochs. The final model checkpoint selected for testing is the one with the highest validation accuracy.
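A sketch of this training loop is given below; `evaluate` is an assumed helper returning validation accuracy, and the batch size of 64 is assumed to be set in the data loaders.

```python
import torch

def train(model, train_loader, val_loader, max_epochs=100, patience=3):
    opt = torch.optim.Adam(model.parameters())
    loss_fn = torch.nn.CrossEntropyLoss()
    best_acc, best_state, stale = -1.0, None, 0
    for _ in range(max_epochs):
        model.train()
        for tokens, regions, labels in train_loader:   # batches of 64
            opt.zero_grad()
            loss = loss_fn(model(tokens, regions), labels)
            loss.backward()
            opt.step()
        acc = evaluate(model, val_loader)              # assumed helper
        if acc > best_acc:
            best_acc, best_state, stale = acc, model.state_dict(), 0
        else:
            stale += 1
            if stale >= patience:                      # early stopping
                break
    model.load_state_dict(best_state)
    return model
```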
SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment ::: Results.
The results of the three experiments enumerated above are reported in Table TABREF18. Surprisingly, we obtained an accuracy of 73.02% on SNLI-VE using BUTD, which is better than the 71.16% reported by Xie BIBREF1 for the EVE system, which was meant to be an improvement over BUTD. It is also better than their reproduction of BUTD, which gave 68.90%.
The same BUTD model that achieves 73.02% on the uncorrected SNLI-VE test set, achieves 73.18% balanced accuracy when tested on the corrected test set from SNLI-VE-2.0. Hence, for this model, we do not notice a significant difference in performance. This could be due to randomness. Finally, when we run the training loop again, this time doing the model selection on the corrected validation set from SNLI-VE-2.0, we obtain a slightly worse performance of 72.52%, although the difference is not clearly significant.
Finally, we recall that the training set has not been re-annotated, and hence approximately 31% image-sentence pairs are wrongly labelled as neutral, which likely affects the performance of the model.
Visual-Textual Entailment with Natural Language Explanations
In this work, we also introduce e-SNLI-VE-2.0, a dataset combining SNLI-VE-2.0 with human-written explanations from e-SNLI BIBREF6, which were originally collected to support textual entailment. We replace the explanations for the neutral pairs in the validation and test sets with new ones collected at the same time as the new labels. We extend a current VTE model with an explanation module able to learn from these explanations at training time and generate an explanation for each predicted label at testing time.
Visual-Textual Entailment with Natural Language Explanations ::: e-SNLI-VE-2.0
e-SNLI BIBREF6 is an extension of the SNLI corpus with human-annotated natural language explanations for the ground-truth labels. The authors use the explanations to train models to also generate natural language justifications for their predictions. They collected one explanation for each instance in the training set of SNLI and three explanations for each instance in the validation and testing sets.
We randomly selected 100 image-sentence pairs in the validation set of SNLI-VE and their corresponding explanations in e-SNLI and examined how relevant these explanations are for the VTE task. More precisely, we say that an explanation is relevant if it brings information that justifies the relationship between the image and the sentence. We restricted the count to correctly labelled inputs and found that 57% explanations were relevant. For example, the explanation for entailment in Figure FIGREF21 (“Cooking in his apartment is cooking”) was counted as irrelevant in our statistics, because it would not be the best explanation for an image-sentence pair, even though it is coherent with the textual pair. We investigate whether these explanations improve a VTE model when enhanced with a component that can process explanations at train time and output them at test time.
To form e-SNLI-VE-2.0, we append to SNLI-VE-2.0 the explanations from e-SNLI for all except the neutral pairs in the validation and test sets of SNLI-VE, which we replace with newly crowdsourced explanations collected at the same time as the labels for these splits (see Figure FIGREF21). Statistics of e-SNLI-VE-2.0 are shown in Appendix SECREF39, Table TABREF40.
Visual-Textual Entailment with Natural Language Explanations ::: Collecting Explanations
As mentioned before, in order to submit the annotation of an image-sentence pair, three steps must be completed: workers must choose a label, highlight words in the hypothesis, and use at least half of the highlighted words to write an explanation for their decision. The last two steps thus follow the quality control of crowd-sourced explanations introduced by Camburu BIBREF6. We also ensured that workers do not simply use a copy of the given hypothesis as explanation. We ensured all the above via in-browser checks before workers' submission. An example of collected explanations is given in Figure FIGREF21.
To check the success of our crowdsourcing, we manually assessed the relevance of explanations among a random subset of 100 examples. A marking scale between 0 and 1 was used, assigning a score of $k$/$n$ when $k$ required attributes were given in an explanation out of $n$. We report an 83.5% relevance of explanations from workers. We note that, since our explanations are VTE-specific, they were phrased differently from the ones in e-SNLI, with more specific mentions to the images (e.g., “There is no labcoat in the picture, just a man wearing a blue shirt.”, “There are no apples or oranges shown in the picture, only bananas.”). Therefore, it would likely be beneficial to collect new explanations for all SNLI-VE-2.0 (not only for the neutral pairs in the validation and test sets) such that models can learn to output convincing explanations for the task at hand. However, we leave this as future work, and we show in this work the results that one obtains when using the explanations from e-SNLI-VE-2.0.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations
This section presents two VTE models that generate natural language explanations for their own decisions. We name them PaE-BUTD-VE and EtP-BUTD-VE, where PaE (resp. EtP) is for PredictAndExplain (resp. ExplainThenPredict), two models with similar principles introduced by Camburu BIBREF6. The first system learns to generate an explanation conditioned on the image premise, textual hypothesis, and predicted label. In contrast, the second system learns to first generate an explanation conditioned on the image premise and textual hypothesis, and subsequently makes a prediction solely based on the explanation.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain
PaE-BUTD-VE is a system for solving VTE and generating natural language explanations for the predicted labels. The explanations are conditioned on the image premise, the text hypothesis, and the predicted label (ground-truth label at train time), as shown in Figure FIGREF24.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Model.
As described in Section SECREF12, in the BUTD model, the hypothesis vector and the image vector were fused in a fixed-size feature vector f. The vector f was then given as input to an MLP which outputs a probability distribution over the three labels. In PaE-BUTD-VE, in addition to the classification layer, we add a 512-LSTM BIBREF12 decoder to generate an explanation. The decoder takes the feature vector f as initial state. Following Camburu BIBREF6, we prepend the label as a token at the beginning of the explanation to condition the explanation on the label. The ground truth label is provided at training time, whereas the predicted label is given at test time.
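The decoder component could look roughly like the PyTorch sketch below (an illustration, not the authors' implementation); the label token, e.g. "<entailment>", is assumed to be prepended to the explanation token sequence, and the fused vector f is assumed to have the same dimensionality as the LSTM hidden state (512).

```python
import torch
import torch.nn as nn

class ExplanationDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hid=512):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, f, expl_tokens):
        # expl_tokens starts with a label token, e.g. "<entailment> the man is ..."
        h0 = f.unsqueeze(0)                 # fused feature vector as initial hidden state
        c0 = torch.zeros_like(h0)
        out, _ = self.lstm(self.emb(expl_tokens), (h0, c0))
        return self.out(out)                # per-step vocabulary logits
```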
At test time, we use beam search with a beam width of 3 to decode explanations. For memory and time reduction, we replaced words that appeared less than 15 times among explanations with “#UNK#”. This strategy reduces the output vocabulary size to approximately 8.6k words.
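The vocabulary reduction can be sketched as follows (an illustration of the described rule, not the exact preprocessing code).

```python
from collections import Counter

def prune_vocab(explanations, min_count=15):
    """Replace words appearing fewer than `min_count` times with "#UNK#"."""
    counts = Counter(w for e in explanations for w in e.split())
    return [[w if counts[w] >= min_count else "#UNK#" for w in e.split()]
            for e in explanations]
```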
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Loss.
The training loss is a weighted combination of the classification loss and the explanation loss, both computed using softmax cross entropy: $\mathcal {L} = \alpha \mathcal {L}_{label} + (1-\alpha ) \mathcal {L}_{explanation} \; \textrm {;} \; \alpha \in [0,1]$.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Model selection.
In this experiment, we are first interested in examining if a neural network can generate explanations at no cost for label accuracy. Therefore, only balanced accuracy on label is used for the model selection criterion. However, future work can investigate other selection criteria involving a combination between the label and explanation performances. We performed hyperparameter search on $\alpha $, considering values between 0.2 and 0.8 with a step of 0.2. We found $\alpha =0.4$ to produce the best validation balanced accuracy of 72.81%, while BUTD trained without explanations yielded a similar 72.58% validation balanced accuracy.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Results.
As summarised in Table TABREF30, we obtain a test balanced accuracy for PaE-BUTD-VE of 73%, while the same model trained without explanations obtains 72.52%. This is encouraging, since it shows that one can obtain additional natural language explanations without sacrificing performance (and possibly even improving the label performance; however, future work is needed to conclude whether the $0.48\%$ improvement in performance is statistically significant).
Camburu BIBREF6 mentioned that the BLEU score was not an appropriate measure for the quality of explanations and suggested human evaluation instead. We therefore manually scored the relevance of 100 explanations that were generated when the model predicted correct labels. We found that only 20% of explanations were relevant. We highlight that the relevance of explanations is in terms of whether the explanation reflects ground-truth reasons supporting the correct label. This is not to be confused with whether an explanation is correctly illustrating the inner working of the model, which is left as future work. It is also important to note that on a similar experimental setting, Camburu report as low as 34.68% correct explanations, training with explanations that were actually collected for their task. Lastly, the model selection criterion at validation time was the prediction balanced accuracy, which may contribute to the low quality of explanations. While we show that adding an explanation module does not harm prediction performance, more work is necessary to get models that output trustable explanations.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict
When assigning a label, an explanation is naturally part of the decision-making process. This motivates the design of a system that explains itself before deciding on a label, called EtP-BUTD-VE. For this system, a first neural network is trained to generate an explanation given an image-sentence input. Separately, a second neural network, called ExplToLabel-VE, is trained to predict a label from an explanation (see Figure FIGREF32).
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Model.
For the first network, we set $\alpha =0$ in the training loss of the PaE-BUTD-VE model to obtain a system that only learns to generate an explanation from the image-sentence input, without label prediction. Hence, in this setting, no label is prepended before the explanation.
For the ExplToLabel-VE model, we use a 512-LSTM followed by an MLP with three 512-unit layers and ReLU activations, and a final softmax layer to classify the explanation as entailment, contradiction, or neutral.
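The ExplToLabel-VE classifier is small enough to sketch directly; the embedding size and the use of the final LSTM state are assumptions, and the softmax is folded into the cross-entropy loss.

```python
import torch.nn as nn

class ExplToLabelVE(nn.Module):
    """Classify a generated explanation into entailment / neutral / contradiction."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=512, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, n_classes),          # softmax applied in the loss
        )

    def forward(self, expl_tokens):
        _, (h_n, _) = self.lstm(self.embed(expl_tokens))
        return self.mlp(h_n[-1])                # logits over the three labels
```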
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Model selection.
For ExplToLabel-VE, the best model is selected on balanced accuracy at validation time. For EtP-BUTD-VE, perplexity is used to select the best model parameters at validation time. It is computed between the explanations produced by the LSTM and ground truth explanations from the validation set.
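Perplexity per word can be computed directly from the average token-level cross-entropy over the validation explanations, as in the sketch below (the padding token id is an assumption).

```python
import math
import torch.nn.functional as F

def perplexity_per_word(expl_logits, expl_targets, pad_id=0):
    """exp(mean token cross-entropy) over non-padding target words."""
    nll = F.cross_entropy(
        expl_logits.reshape(-1, expl_logits.size(-1)),
        expl_targets.reshape(-1),
        ignore_index=pad_id,
        reduction="mean",
    )
    return math.exp(nll.item())
```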
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Results.
When we train ExplToLabel-VE on e-SNLI-VE-2.0, we obtain a balanced accuracy of 90.55% on the test set.
As reported in Table TABREF30, the overall EtP-BUTD-VE system achieves 69.40% balanced accuracy on the test set of e-SNLI-VE-2.0, which is a 3% decrease from the non-explanatory BUTD counterpart (72.52%). However, by setting $\alpha $ to zero and selecting the model that gives the best perplexity per word at validation, the quality of the explanations increased significantly, reaching 35% relevance based on manual evaluation. Thus, in our model, generating better explanations involves a small sacrifice in label prediction accuracy, implying a trade-off between explanation generation and accuracy.
We note that there is room for improvement in our explanation generation method. For example, one can implement an attention mechanism similar to Xu BIBREF13, so that each generated word relates to a relevant part of the multimodal feature representation.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Qualitative Analysis of Generated Explanations
We complement our quantitative results with a qualitative analysis of the explanations generated by our enhanced VTE systems. In Figures FIGREF36 and FIGREF37, we present examples of the predicted labels and generated explanations.
Figure FIGREF36 shows an example where the EtP-BUTD-VE model produces both a correct label and a relevant explanation. The label is contradiction, because in the image, the students are playing with a soccer ball and not a basketball, thus contradicting the text hypothesis. Given the composition of the generated sentence (“Students cannot be playing soccer and baseball at the same time.”), ExplToLabel-VE was able to detect a contradiction in the image-sentence input. In comparison, the explanation from e-SNLI-VE-2.0 is not correct, even if it was valid for e-SNLI when the text premise was given. This emphasizes the difficulty that we are facing with generating proper explanations when training on a noisy dataset.
Even when the generated explanations are irrelevant, we noticed that they are on-topic and that most of the time the mistakes come from repetitions of certain sub-phrases. For example, in Figure FIGREF37, PaE-BUTD-VE predicts the label neutral, which is correct, but the explanation contains an erroneous repetition of the n-gram “are in a car”. However, it appears that the system learns to generate a sentence in the form “Just because ...doesn't mean ...”, which is frequently found for the justification of neutral pairs in the training set. The explanation generated by EtP-BUTD-VE adopts the same structure, and the ExplToLabel-VE component correctly classifies the instance as neutral. However, even if the explanation is semantically correct, it is not relevant for the input and fails to explain the classification.
Conclusion
In this paper, we first presented SNLI-VE-2.0, which corrects the neutral instances in the validation and test sets of SNLI-VE. Secondly, we re-evaluated an existing model on the corrected sets in order to update the estimate of its performance on this task. Thirdly, we introduced e-SNLI-VE-2.0, a dataset which extends SNLI-VE-2.0 with natural language explanations. Finally, we trained two types of models that learn from these explanations at training time, and output such explanations at test time, as a stepping stone in explainable artificial intelligence. Our work is a jumping-off point both for the identification and correction of errors in SNLI-VE and for the extension to explainable VTE. We hope that the community will build on our findings to create more robust as well as explainable multimodal systems.
Conclusion ::: Acknowledgements.
This work was supported by the Oxford Internet Institute, a JP Morgan PhD Fellowship 2019-2020, an Oxford-DeepMind Graduate Scholarship, the Alan Turing Institute under the EPSRC grant EP/N510129/1, and the AXA Research Fund, as well as DFG-EXC-Nummer 2064/1-Projektnummer 390727645 and the ERC under the Horizon 2020 program (grant agreement No. 853489).
Appendix ::: Statistics of e-SNLI-VE-2.0
e-SNLI-VE-2.0 is the combination of SNLI-VE-2.0 with explanations from either e-SNLI or our crowdsourced annotations where applicable. The statistics of e-SNLI-VE-2.0, including text hypotheses and explanations, are shown in Table TABREF40.
Appendix ::: Details of the Mechanical Turk Task
We used Amazon Mechanical Turk (MTurk) to collect new labels and explanations for SNLI-VE. 2,060 workers participated in the annotation effort, with an average of 1.98 assignments per worker and a standard deviation of 5.54. We required the workers to have a previous approval rate above 90%. No restriction was put on the workers' location.
Each assignment consisted of a set of 10 image-sentence pairs. For each pair, the participant was asked to (a) choose a label, (b) highlight words in the sentence that led to their decision, and (c) explain their decision in a comprehensive and concise manner, using a subset of the words that they highlighted. The instructions are shown in Figure FIGREF42. Workers were also guided with three annotated examples, one for each label.
For each assignment of 10 questions, one trusted annotation with a gold-standard label was inserted at a random position, as a measure to control the quality of label annotation. Each assignment was completed by three different workers. An example question is shown in Figure FIGREF8 in the core paper.
Appendix ::: Ambiguous Examples from SNLI-VE
Some examples in SNLI-VE were ambiguous, and correct justifications could be found for incompatible labels, as shown in Figures FIGREF44, FIGREF45, and FIGREF46. | balanced accuracy, i.e., the average of the three accuracies on each class |
66bf0d61ffc321f15e7347aaed191223f4ce4b4a | 66bf0d61ffc321f15e7347aaed191223f4ce4b4a_0 | Q: How many annotators are used to write natural language explanations to SNLI-VE-2.0?
Text: Introduction
Inspired by textual entailment BIBREF0, Xie BIBREF1 introduced the visual-textual entailment (VTE) task, which considers semantic entailment between a premise image and a textual hypothesis. Semantic entailment consists in determining if the hypothesis can be concluded from the premise, and assigning to each pair of (premise image, textual hypothesis) a label among entailment, neutral, and contradiction. In Figure FIGREF3, the label for the first image-sentence pair is entailment, because the hypothesis states that “a bunch of people display different flags”, which can be clearly derived from the image. On the contrary, the second image-sentence pair is labelled as contradiction, because the hypothesis stating that “people [are] running a marathon” contradicts the image with static people.
Xie also propose the SNLI-VE dataset as the first dataset for VTE. SNLI-VE is built from the textual entailment SNLI dataset BIBREF0 by replacing textual premises with the Flickr30k images that they originally described BIBREF2. However, images contain more information than their descriptions, which may entail or contradict the textual hypotheses (see Figure FIGREF3). As a result, the neutral class in SNLI-VE has substantial labelling errors. Vu BIBREF3 estimated ${\sim }31\%$ errors in this class, and ${\sim }1\%$ for the contradiction and entailment classes.
Xie BIBREF1 introduced the VTE task under the name of “visual entailment”, which could imply recognizing entailment between images only. This paper prefers to follow Suzuki BIBREF4 and call it “visual-textual entailment” instead, as it involves reasoning on image-sentence pairs.
In this work, we first focus on decreasing the error in the neutral class by collecting new labels for the neutral pairs in the validation and test sets of SNLI-VE, using Amazon Mechanical Turk (MTurk). To ensure high quality annotations, we used a series of quality control measures, such as in-browser checks, inserting trusted examples, and collecting three annotations per instance. Secondly, we re-evaluate current image-text understanding systems, such as the bottom-up top-down attention network (BUTD) BIBREF5 on VTE using our corrected dataset, which we call SNLI-VE-2.0.
Thirdly, we introduce the e-SNLI-VE-2.0 corpus, which we form by appending human-written natural language explanations to SNLI-VE-2.0. These explanations were collected in e-SNLI BIBREF6 to support textual entailment for SNLI. For the same reasons as above, we re-annotate the explanations for the neutral pairs in the validation and test sets, while keeping the explanations from e-SNLI for all the rest. Finally, we extend a current VTE model with the capacity of learning from these explanations at training time and outputting an explanation for each predicted label at testing time.
SNLI-VE-2.0
The goal of VTE is to determine if a textual hypothesis $H_{text}$ can be concluded, given the information in a premise image $P_{image}$ BIBREF1. There are three possible labels:
Entailment: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is true.
Contradiction: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is false.
Neutral: if neither of the earlier two are true.
The SNLI-VE dataset proposed by Xie BIBREF1 is the combination of Flickr30k, a popular image dataset for image captioning BIBREF2 and SNLI, an influential dataset for natural language inference BIBREF0. Textual premises from SNLI are replaced with images from Flickr30k, which is possible, as these premises were originally collected as captions of these images (see Figure FIGREF3).
However, in practice, a considerable proportion of labels are wrong due to the additional information contained in images. This mostly affects neutral pairs, since images may contain the necessary information to ground a hypothesis for which a simple premise caption was not sufficient. An example is shown in Figure FIGREF3. Vu BIBREF3 report that the label is wrong for ${\sim }31\%$ of neutral examples, based on a random subset of 171 neutral points from the test set. We also annotated 150 random neutral examples from the test set and found a similar percentage of 30.6% errors.
Our annotations are available at https://github.com/virginie-do/e-SNLI-VE/tree/master/annotations/gt_labels.csv
SNLI-VE-2.0 ::: Re-annotation details
In this work, we only collect new labels for the neutral pairs in the validation and test sets of SNLI-VE. While the procedure of re-annotation is generic, we limit our re-annotation to these splits as a first step to verify the difference in performance that current models have when evaluated on the corrected test set as well as the effect of model selection on the corrected validation set. We leave for future work re-annotation of the training set, which would likely lead to training better VTE models. We also chose not to re-annotate entailment and contradiction classes, as their error rates are much lower ($<$1% as reported by Vu BIBREF3).
The main question that we want our dataset to answer is: “What is the relationship between the image premise and the sentence hypothesis?”. We provide workers with the definitions of entailment, neutral, and contradiction for image-sentence pairs and one example for each label. As shown in Figure FIGREF8, for each image-sentence pair, workers are required to (a) choose a label, (b) highlight words in the sentence that led to their decision, and (c) explain their decision in a comprehensive and concise manner, using at least half of the words that they highlighted. The collected explanations will be presented in more detail in Section SECREF20, as we focus here on the label correction. We point out that it is likely that requiring an explanation at the same time as requiring a label has a positive effect on the correctness of the label, since having to justify in writing the picked label may make workers pay an increased attention. Moreover, we implemented additional quality control measures for crowdsourced annotations, such as (a) collecting three annotations for every input, (b) injecting trusted annotations into the task for verification BIBREF7, and (c) restricting to workers with at least 90% previous approval rate.
First, we noticed that some instances in SNLI-VE are ambiguous. We show some examples in Figure FIGREF3 and in Appendix SECREF43. In order to have a better sense of this ambiguity, three authors of this paper independently annotated 100 random examples. All three authors agreed on 54% of the examples, exactly two authors agreed on 45%, and there was only one example on which all three authors disagreed. We identified the following three major sources of ambiguity:
mapping an emotion in the hypothesis to a facial expression in the image premise, e.g., “people enjoy talking”, “angry people”, “sad woman”. Even when the face is seen, it may be subjective to infer an emotion from a static image (see Figure FIGREF44 in Appendix SECREF43).
personal taste, e.g., “the sign is ugly”.
lack of consensus on terms such as “many people” or “crowded”.
To account for the ambiguity that the neutral labels seem to present, we considered that an image-sentence pair is too ambiguous and not suitable for a well-defined visual-textual entailment task when three different labels were assigned by the three workers. Hence, we removed these examples from the validation (5.2%) and test (5.5%) sets.
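A sketch of this aggregation rule over the three collected labels per pair is given below; the majority-vote reading for the remaining pairs is our assumption of the natural procedure, not a quote of the authors' script.

```python
from collections import Counter

def aggregate_labels(worker_labels):
    """worker_labels: three labels from {'entailment', 'neutral', 'contradiction'}.
    Returns the majority label, or None when all three workers disagree
    (such pairs are removed from the corrected validation/test splits)."""
    label, count = Counter(worker_labels).most_common(1)[0]
    return label if count >= 2 else None
```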
To ensure that our workers are correctly performing the task, we randomly inserted trusted pairs, i.e., pairs among the 54% on which all three authors agreed on the label. For each set of 10 pairs presented to a worker, one trusted pair was introduced at a random location, so that the worker, while being told that there is such a test pair, cannot figure out which one it is. Via an in-browser check, we only allow workers to submit their answers for each set of 10 instances only if the trusted pair was correctly labelled. Other in-browser checks were done for the collection of explanations, as we will describe in Section SECREF20. More details about the participants and design of the Mechanical Turk task can be found in Appendix SECREF41.
After collecting new labels for the neutral instances in the validation and testing sets, we randomly select and annotate 150 instances from the validation set that were neutral in SNLI-VE. Based on this sample, the error rate went down from 31% to 12% in SNLI-VE-2.0. Looking at the 18 instances where we disagreed with the label assigned by MTurk workers, we noticed that 12 were due to ambiguity in the examples, and 6 were due to workers' errors. Further investigation into potentially eliminating ambiguous instances would likely be beneficial. However, we leave it as future work, and we proceed in this work with using our corrected labels, since our error rate is significantly lower than that of the original SNLI-VE.
Finally, we note that only about 62% of the originally neutral pairs remain neutral, while 21% become contradiction and 17% entailment pairs. Therefore, we are now facing an imbalance between the neutral, entailment, and contradiction instances in the validation and testing sets of SNLI-VE-2.0. The neutral class becomes underrepresented and the label distributions in the corrected validation and testing sets both become E / N / C: 39% / 20% / 41%. To account for this, we compute the balanced accuracy, i.e., the average of the three accuracies on each class.
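Balanced accuracy, as used in the rest of the paper, is simply the mean of the per-class accuracies; a minimal sketch, with function and argument names as assumptions:

```python
def balanced_accuracy(y_true, y_pred, classes=("entailment", "neutral", "contradiction")):
    """Average of per-class accuracies, robust to the E/N/C imbalance."""
    per_class = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        if idx:
            per_class.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(per_class) / len(per_class)
```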
SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment
Since we decreased the error rate of labels in the validation and test set, we are interested in the performance of a VTE model when using the corrected sets.
SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment ::: Model.
To tackle SNLI-VE, Xie BIBREF1 used EVE (for “Explainable Visual Entailment”), a modified version of the BUTD architecture, the winner of the Visual Question Answering (VQA) challenge in 2017 BIBREF5. Since the EVE implementation is not available at the time of this work, we used the original BUTD architecture, with the same hyperparameters as reported in BIBREF1.
BUTD contains an image processing module and a text processing module. The image processing module encodes each image region proposed by FasterRCNN BIBREF8 into a feature vector using a bottom-up attention mechanism. In the text processing module, the text hypothesis is encoded into a fixed-length vector, which is the last output of a recurrent neural network with 512-GRU units BIBREF9. To input each token into the recurrent network, we use the pretrained GloVe vectors BIBREF10. Finally, a top-down attention mechanism is used between the hypothesis vector and each of the image region vectors to obtain an attention weight for each region. The weighted sum of these image region vectors is then fused with the text hypothesis vector. The multimodal fusion is fed to a multilayer perceptron (MLP) with tanh activations and a final softmax layer to classify the image-sentence relation as entailment, contradiction, or neutral.
Using the implementation from https://github.com/claudiogreco/coling18-gte.
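To make the top-down attention and fusion step concrete, here is a simplified PyTorch-style sketch; the 2048-dimensional region features, the additive attention scoring, and the elementwise-product fusion are simplifying assumptions rather than the exact BUTD implementation.

```python
import torch
import torch.nn as nn

class TopDownFusion(nn.Module):
    """Attend over region features with the hypothesis vector, then fuse them."""
    def __init__(self, region_dim=2048, hyp_dim=512, joint_dim=512):
        super().__init__()
        self.proj_region = nn.Linear(region_dim, joint_dim)
        self.proj_hyp = nn.Linear(hyp_dim, joint_dim)
        self.att_score = nn.Linear(joint_dim, 1)

    def forward(self, regions, hyp):
        # regions: (batch, n_regions, region_dim); hyp: (batch, hyp_dim)
        joint = torch.tanh(self.proj_region(regions) + self.proj_hyp(hyp).unsqueeze(1))
        weights = torch.softmax(self.att_score(joint).squeeze(-1), dim=1)   # one weight per region
        attended = (weights.unsqueeze(-1) * regions).sum(dim=1)             # weighted sum of regions
        # multimodal fusion of the attended image vector and the hypothesis vector
        return torch.tanh(self.proj_region(attended)) * torch.tanh(self.proj_hyp(hyp))
```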
We use the original training set from SNLI-VE. To see the impact of correcting the validation and test sets, we do the following three experiments:
model selection as well as testing are done on the original uncorrected SNLI-VE.
model selection is done on the uncorrected SNLI-VE validation set, while testing is done on the corrected SNLI-VE-2.0 test set.
model selection as well as testing are done on the corrected SNLI-VE-2.0.
Models are trained with cross-entropy loss optimized by the Adam optimizer BIBREF11 with batch size 64. The maximum number of training epochs is set to 100, with early stopping when no improvement is observed on validation accuracy for 3 epochs. The final model checkpoint selected for testing is the one with the highest validation accuracy.
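The training and model-selection loop can be sketched as follows; the learning rate, the dataloader format, and the `evaluate` helper are assumptions, while the optimizer, batch size, epoch limit, and patience follow the text.

```python
import copy
import torch

def train_with_early_stopping(model, train_loader, evaluate, max_epochs=100, patience=3, lr=1e-3):
    """Adam + cross-entropy with early stopping on validation accuracy (patience 3)."""
    optim = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    best_acc, best_state, bad_epochs = 0.0, None, 0
    for epoch in range(max_epochs):
        model.train()
        for images, hypotheses, labels in train_loader:   # batches of size 64
            optim.zero_grad()
            loss = loss_fn(model(images, hypotheses), labels)
            loss.backward()
            optim.step()
        val_acc = evaluate(model)                          # validation accuracy
        if val_acc > best_acc:
            best_acc, best_state, bad_epochs = val_acc, copy.deepcopy(model.state_dict()), 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:                     # no improvement for 3 epochs
                break
    model.load_state_dict(best_state)                      # keep the best checkpoint
    return model
```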
SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment ::: Results.
The results of the three experiments enumerated above are reported in Table TABREF18. Surprisingly, we obtained an accuracy of 73.02% on SNLI-VE using BUTD, which is better than the 71.16% reported by Xie BIBREF1 for the EVE system, which was meant to be an improvement over BUTD. It is also better than their reproduction of BUTD, which gave 68.90%.
The same BUTD model that achieves 73.02% on the uncorrected SNLI-VE test set, achieves 73.18% balanced accuracy when tested on the corrected test set from SNLI-VE-2.0. Hence, for this model, we do not notice a significant difference in performance. This could be due to randomness. Finally, when we run the training loop again, this time doing the model selection on the corrected validation set from SNLI-VE-2.0, we obtain a slightly worse performance of 72.52%, although the difference is not clearly significant.
Finally, we recall that the training set has not been re-annotated, and hence approximately 31% of image-sentence pairs are wrongly labelled as neutral, which likely affects the performance of the model.
Visual-Textual Entailment with Natural Language Explanations
In this work, we also introduce e-SNLI-VE-2.0, a dataset combining SNLI-VE-2.0 with human-written explanations from e-SNLI BIBREF6, which were originally collected to support textual entailment. We replace the explanations for the neutral pairs in the validation and test sets with new ones collected at the same time as the new labels. We extend a current VTE model with an explanation module able to learn from these explanations at training time and generate an explanation for each predicted label at testing time.
Visual-Textual Entailment with Natural Language Explanations ::: e-SNLI-VE-2.0
e-SNLI BIBREF6 is an extension of the SNLI corpus with human-annotated natural language explanations for the ground-truth labels. The authors use the explanations to train models to also generate natural language justifications for their predictions. They collected one explanation for each instance in the training set of SNLI and three explanations for each instance in the validation and testing sets.
We randomly selected 100 image-sentence pairs in the validation set of SNLI-VE and their corresponding explanations in e-SNLI and examined how relevant these explanations are for the VTE task. More precisely, we say that an explanation is relevant if it provides information that justifies the relationship between the image and the sentence. We restricted the count to correctly labelled inputs and found that 57% of explanations were relevant. For example, the explanation for entailment in Figure FIGREF21 (“Cooking in his apartment is cooking”) was counted as irrelevant in our statistics, because it would not be the best explanation for an image-sentence pair, even though it is coherent with the textual pair. We investigate whether these explanations improve a VTE model when enhanced with a component that can process explanations at train time and output them at test time.
To form e-SNLI-VE-2.0, we append to SNLI-VE-2.0 the explanations from e-SNLI for all except the neutral pairs in the validation and test sets of SNLI-VE, which we replace with newly crowdsourced explanations collected at the same time as the labels for these splits (see Figure FIGREF21). Statistics of e-SNLI-VE-2.0 are shown in Appendix SECREF39, Table TABREF40.
Visual-Textual Entailment with Natural Language Explanations ::: Collecting Explanations
As mentioned before, in order to submit the annotation of an image-sentence pair, three steps must be completed: workers must choose a label, highlight words in the hypothesis, and use at least half of the highlighted words to write an explanation for their decision. The last two steps thus follow the quality control of crowd-sourced explanations introduced by Camburu BIBREF6. We also ensured that workers do not simply use a copy of the given hypothesis as explanation. We ensured all the above via in-browser checks before workers' submission. An example of collected explanations is given in Figure FIGREF21.
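These submission checks can be expressed as a small validation function; the exact tokenisation and matching rules used in the browser are assumptions.

```python
def explanation_is_valid(explanation, hypothesis, highlighted_words):
    """Accept an explanation only if it uses at least half of the highlighted
    words and is not simply a copy of the hypothesis."""
    expl_words = set(explanation.lower().split())
    used = sum(w.lower() in expl_words for w in highlighted_words)
    not_a_copy = explanation.strip().lower() != hypothesis.strip().lower()
    return used >= len(highlighted_words) / 2 and not_a_copy
```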
To check the success of our crowdsourcing, we manually assessed the relevance of explanations among a random subset of 100 examples. A marking scale between 0 and 1 was used, assigning a score of $k$/$n$ when $k$ out of $n$ required attributes were mentioned in an explanation. We report an 83.5% relevance of explanations from workers. We note that, since our explanations are VTE-specific, they were phrased differently from the ones in e-SNLI, with more specific mentions of the images (e.g., “There is no labcoat in the picture, just a man wearing a blue shirt.”, “There are no apples or oranges shown in the picture, only bananas.”). Therefore, it would likely be beneficial to collect new explanations for all of SNLI-VE-2.0 (not only for the neutral pairs in the validation and test sets) such that models can learn to output convincing explanations for the task at hand. However, we leave this as future work and report here the results that one obtains when using the explanations from e-SNLI-VE-2.0.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations
This section presents two VTE models that generate natural language explanations for their own decisions. We name them PaE-BUTD-VE and EtP-BUTD-VE, where PaE (resp. EtP) is for PredictAndExplain (resp. ExplainThenPredict), two models with similar principles introduced by Camburu BIBREF6. The first system learns to generate an explanation conditioned on the image premise, textual hypothesis, and predicted label. In contrast, the second system learns to first generate an explanation conditioned on the image premise and textual hypothesis, and subsequently makes a prediction solely based on the explanation.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain
PaE-BUTD-VE is a system for solving VTE and generating natural language explanations for the predicted labels. The explanations are conditioned on the image premise, the text hypothesis, and the predicted label (ground-truth label at train time), as shown in Figure FIGREF24.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Model.
As described in Section SECREF12, in the BUTD model, the hypothesis vector and the image vector were fused in a fixed-size feature vector f. The vector f was then given as input to an MLP which outputs a probability distribution over the three labels. In PaE-BUTD-VE, in addition to the classification layer, we add a 512-LSTM BIBREF12 decoder to generate an explanation. The decoder takes the feature vector f as initial state. Following Camburu BIBREF6, we prepend the label as a token at the beginning of the explanation to condition the explanation on the label. The ground truth label is provided at training time, whereas the predicted label is given at test time.
At test time, we use beam search with a beam width of 3 to decode explanations. To reduce memory and computation time, we replaced words that appeared fewer than 15 times among explanations with “#UNK#”. This strategy reduces the output vocabulary size to approximately 8.6k words.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Loss.
The training loss is a weighted combination of the classification loss and the explanation loss, both computed using softmax cross entropy: $\mathcal {L} = \alpha \mathcal {L}_{label} + (1-\alpha ) \mathcal {L}_{explanation} \; \textrm {;} \; \alpha \in [0,1]$.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Model selection.
In this experiment, we are first interested in examining whether a neural network can generate explanations at no cost in label accuracy. Therefore, only the balanced label accuracy is used as the model selection criterion. However, future work can investigate other selection criteria that combine label and explanation performance. We performed a hyperparameter search on $\alpha $, considering values between 0.2 and 0.8 with a step of 0.2. We found $\alpha =0.4$ to produce the best validation balanced accuracy of 72.81%, while BUTD trained without explanations yielded a similar 72.58% validation balanced accuracy.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Results.
As summarised in Table TABREF30, we obtain a test balanced accuracy for PaE-BUTD-VE of 73%, while the same model trained without explanations obtains 72.52%. This is encouraging, since it shows that one can obtain additional natural language explanations without sacrificing performance (and possibly even improving the label performance; however, future work is needed to conclude whether the $0.48\%$ improvement in performance is statistically significant).
Camburu BIBREF6 mentioned that the BLEU score was not an appropriate measure for the quality of explanations and suggested human evaluation instead. We therefore manually scored the relevance of 100 explanations that were generated when the model predicted correct labels. We found that only 20% of explanations were relevant. We highlight that the relevance of explanations is in terms of whether the explanation reflects ground-truth reasons supporting the correct label. This is not to be confused with whether an explanation is correctly illustrating the inner working of the model, which is left as future work. It is also important to note that on a similar experimental setting, Camburu report as low as 34.68% correct explanations, training with explanations that were actually collected for their task. Lastly, the model selection criterion at validation time was the prediction balanced accuracy, which may contribute to the low quality of explanations. While we show that adding an explanation module does not harm prediction performance, more work is necessary to get models that output trustable explanations.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict
When assigning a label, an explanation is naturally part of the decision-making process. This motivates the design of a system that explains itself before deciding on a label, called EtP-BUTD-VE. For this system, a first neural network is trained to generate an explanation given an image-sentence input. Separately, a second neural network, called ExplToLabel-VE, is trained to predict a label from an explanation (see Figure FIGREF32).
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Model.
For the first network, we set $\alpha =0$ in the training loss of the PaE-BUTD-VE model to obtain a system that only learns to generate an explanation from the image-sentence input, without label prediction. Hence, in this setting, no label is prepended before the explanation.
For the ExplToLabel-VE model, we use a 512-LSTM followed by an MLP with three 512-unit layers and ReLU activations, and a final softmax layer to classify the explanation as entailment, contradiction, or neutral.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Model selection.
For ExplToLabel-VE, the best model is selected on balanced accuracy at validation time. For EtP-BUTD-VE, perplexity is used to select the best model parameters at validation time. It is computed between the explanations produced by the LSTM and ground truth explanations from the validation set.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Results.
When we train ExplToLabel-VE on e-SNLI-VE-2.0, we obtain a balanced accuracy of 90.55% on the test set.
As reported in Table TABREF30, the overall EtP-BUTD-VE system achieves 69.40% balanced accuracy on the test set of e-SNLI-VE-2.0, which is a 3% decrease from the non-explanatory BUTD counterpart (72.52%). However, by setting $\alpha $ to zero and selecting the model that gives the best perplexity per word at validation, the quality of the explanations increased significantly, reaching 35% relevance based on manual evaluation. Thus, in our model, generating better explanations involves a small sacrifice in label prediction accuracy, implying a trade-off between explanation generation and accuracy.
We note that there is room for improvement in our explanation generation method. For example, one can implement an attention mechanism similar to Xu BIBREF13, so that each generated word relates to a relevant part of the multimodal feature representation.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Qualitative Analysis of Generated Explanations
We complement our quantitative results with a qualitative analysis of the explanations generated by our enhanced VTE systems. In Figures FIGREF36 and FIGREF37, we present examples of the predicted labels and generated explanations.
Figure FIGREF36 shows an example where the EtP-BUTD-VE model produces both a correct label and a relevant explanation. The label is contradiction, because in the image, the students are playing with a soccer ball and not a basketball, thus contradicting the text hypothesis. Given the composition of the generated sentence (“Students cannot be playing soccer and baseball at the same time.”), ExplToLabel-VE was able to detect a contradiction in the image-sentence input. In comparison, the explanation from e-SNLI-VE-2.0 is not correct, even if it was valid for e-SNLI when the text premise was given. This emphasizes the difficulty that we are facing with generating proper explanations when training on a noisy dataset.
Even when the generated explanations are irrelevant, we noticed that they are on-topic and that most of the time the mistakes come from repetitions of certain sub-phrases. For example, in Figure FIGREF37, PaE-BUTD-VE predicts the label neutral, which is correct, but the explanation contains an erroneous repetition of the n-gram “are in a car”. However, it appears that the system learns to generate a sentence in the form “Just because ...doesn't mean ...”, which is frequently found for the justification of neutral pairs in the training set. The explanation generated by EtP-BUTD-VE adopts the same structure, and the ExplToLabel-VE component correctly classifies the instance as neutral. However, even if the explanation is semantically correct, it is not relevant for the input and fails to explain the classification.
Conclusion
In this paper, we first presented SNLI-VE-2.0, which corrects the neutral instances in the validation and test sets of SNLI-VE. Secondly, we re-evaluated an existing model on the corrected sets in order to update the estimate of its performance on this task. Thirdly, we introduced e-SNLI-VE-2.0, a dataset which extends SNLI-VE-2.0 with natural language explanations. Finally, we trained two types of models that learn from these explanations at training time, and output such explanations at test time, as a stepping stone in explainable artificial intelligence. Our work is a jumping-off point both for the identification and correction of errors in SNLI-VE and for the extension to explainable VTE. We hope that the community will build on our findings to create more robust as well as explainable multimodal systems.
Conclusion ::: Acknowledgements.
This work was supported by the Oxford Internet Institute, a JP Morgan PhD Fellowship 2019-2020, an Oxford-DeepMind Graduate Scholarship, the Alan Turing Institute under the EPSRC grant EP/N510129/1, and the AXA Research Fund, as well as DFG-EXC-Nummer 2064/1-Projektnummer 390727645 and the ERC under the Horizon 2020 program (grant agreement No. 853489).
Appendix ::: Statistics of e-SNLI-VE-2.0
e-SNLI-VE-2.0 is the combination of SNLI-VE-2.0 with explanations from either e-SNLI or our crowdsourced annotations where applicable. The statistics of e-SNLI-VE-2.0, including text hypotheses and explanations, are shown in Table TABREF40.
Appendix ::: Details of the Mechanical Turk Task
We used Amazon Mechanical Turk (MTurk) to collect new labels and explanations for SNLI-VE. 2,060 workers participated in the annotation effort, with an average of 1.98 assignments per worker and a standard deviation of 5.54. We required the workers to have a previous approval rate above 90%. No restriction was put on the workers' location.
Each assignment consisted of a set of 10 image-sentence pairs. For each pair, the participant was asked to (a) choose a label, (b) highlight words in the sentence that led to their decision, and (c) explain their decision in a comprehensive and concise manner, using a subset of the words that they highlighted. The instructions are shown in Figure FIGREF42. Workers were also guided with three annotated examples, one for each label.
For each assignment of 10 questions, one trusted annotation with a gold-standard label was inserted at a random position, as a measure to control the quality of label annotation. Each assignment was completed by three different workers. An example question is shown in Figure FIGREF8 in the core paper.
Appendix ::: Ambiguous Examples from SNLI-VE
Some examples in SNLI-VE were ambiguous, and correct justifications could be found for incompatible labels, as shown in Figures FIGREF44, FIGREF45, and FIGREF46. | 2,060 workers |
5dfa59c116e0ceb428efd99bab19731aa3df4bbd | 5dfa59c116e0ceb428efd99bab19731aa3df4bbd_0 | Q: How many natural language explanations are human-written?
Text: Introduction
Inspired by textual entailment BIBREF0, Xie BIBREF1 introduced the visual-textual entailment (VTE) task, which considers semantic entailment between a premise image and a textual hypothesis. Semantic entailment consists in determining if the hypothesis can be concluded from the premise, and assigning to each pair of (premise image, textual hypothesis) a label among entailment, neutral, and contradiction. In Figure FIGREF3, the label for the first image-sentence pair is entailment, because the hypothesis states that “a bunch of people display different flags”, which can be clearly derived from the image. On the contrary, the second image-sentence pair is labelled as contradiction, because the hypothesis stating that “people [are] running a marathon” contradicts the image with static people.
Xie also propose the SNLI-VE dataset as the first dataset for VTE. SNLI-VE is built from the textual entailment SNLI dataset BIBREF0 by replacing textual premises with the Flickr30k images that they originally described BIBREF2. However, images contain more information than their descriptions, which may entail or contradict the textual hypotheses (see Figure FIGREF3). As a result, the neutral class in SNLI-VE has substantial labelling errors. Vu BIBREF3 estimated ${\sim }31\%$ errors in this class, and ${\sim }1\%$ for the contradiction and entailment classes.
Xie BIBREF1 introduced the VTE task under the name of “visual entailment”, which could imply recognizing entailment between images only. This paper prefers to follow Suzuki BIBREF4 and call it “visual-textual entailment” instead, as it involves reasoning on image-sentence pairs.
In this work, we first focus on decreasing the error in the neutral class by collecting new labels for the neutral pairs in the validation and test sets of SNLI-VE, using Amazon Mechanical Turk (MTurk). To ensure high quality annotations, we used a series of quality control measures, such as in-browser checks, inserting trusted examples, and collecting three annotations per instance. Secondly, we re-evaluate current image-text understanding systems, such as the bottom-up top-down attention network (BUTD) BIBREF5 on VTE using our corrected dataset, which we call SNLI-VE-2.0.
Thirdly, we introduce the e-SNLI-VE-2.0 corpus, which we form by appending human-written natural language explanations to SNLI-VE-2.0. These explanations were collected in e-SNLI BIBREF6 to support textual entailment for SNLI. For the same reasons as above, we re-annotate the explanations for the neutral pairs in the validation and test sets, while keeping the explanations from e-SNLI for all the rest. Finally, we extend a current VTE model with the capacity of learning from these explanations at training time and outputting an explanation for each predicted label at testing time.
SNLI-VE-2.0
The goal of VTE is to determine if a textual hypothesis $H_{text}$ can be concluded, given the information in a premise image $P_{image}$ BIBREF1. There are three possible labels:
Entailment: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is true.
Contradiction: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is false.
Neutral: if neither of the earlier two are true.
The SNLI-VE dataset proposed by Xie BIBREF1 is the combination of Flickr30k, a popular image dataset for image captioning BIBREF2 and SNLI, an influential dataset for natural language inference BIBREF0. Textual premises from SNLI are replaced with images from Flickr30k, which is possible, as these premises were originally collected as captions of these images (see Figure FIGREF3).
However, in practice, a considerable proportion of labels are wrong due to the additional information contained in images. This mostly affects neutral pairs, since images may contain the necessary information to ground a hypothesis for which a simple premise caption was not sufficient. An example is shown in Figure FIGREF3. Vu BIBREF3 report that the label is wrong for ${\sim }31\%$ of neutral examples, based on a random subset of 171 neutral points from the test set. We also annotated 150 random neutral examples from the test set and found a similar percentage of 30.6% errors.
Our annotations are available at https://github.com/virginie-do/e-SNLI-VE/tree/master/annotations/gt_labels.csv
SNLI-VE-2.0 ::: Re-annotation details
In this work, we only collect new labels for the neutral pairs in the validation and test sets of SNLI-VE. While the procedure of re-annotation is generic, we limit our re-annotation to these splits as a first step to verify the difference in performance that current models have when evaluated on the corrected test set as well as the effect of model selection on the corrected validation set. We leave for future work re-annotation of the training set, which would likely lead to training better VTE models. We also chose not to re-annotate entailment and contradiction classes, as their error rates are much lower ($<$1% as reported by Vu BIBREF3).
The main question that we want our dataset to answer is: “What is the relationship between the image premise and the sentence hypothesis?”. We provide workers with the definitions of entailment, neutral, and contradiction for image-sentence pairs and one example for each label. As shown in Figure FIGREF8, for each image-sentence pair, workers are required to (a) choose a label, (b) highlight words in the sentence that led to their decision, and (c) explain their decision in a comprehensive and concise manner, using at least half of the words that they highlighted. The collected explanations will be presented in more detail in Section SECREF20, as we focus here on the label correction. We point out that it is likely that requiring an explanation at the same time as requiring a label has a positive effect on the correctness of the label, since having to justify in writing the picked label may make workers pay an increased attention. Moreover, we implemented additional quality control measures for crowdsourced annotations, such as (a) collecting three annotations for every input, (b) injecting trusted annotations into the task for verification BIBREF7, and (c) restricting to workers with at least 90% previous approval rate.
First, we noticed that some instances in SNLI-VE are ambiguous. We show some examples in Figure FIGREF3 and in Appendix SECREF43. In order to have a better sense of this ambiguity, three authors of this paper independently annotated 100 random examples. All three authors agreed on 54% of the examples, exactly two authors agreed on 45%, and there was only one example on which all three authors disagreed. We identified the following three major sources of ambiguity:
mapping an emotion in the hypothesis to a facial expression in the image premise, e.g., “people enjoy talking”, “angry people”, “sad woman”. Even when the face is seen, it may be subjective to infer an emotion from a static image (see Figure FIGREF44 in Appendix SECREF43).
personal taste, e.g., “the sign is ugly”.
lack of consensus on terms such as “many people” or “crowded”.
To account for the ambiguity that the neutral labels seem to present, we considered that an image-sentence pair is too ambiguous and not suitable for a well-defined visual-textual entailment task when three different labels were assigned by the three workers. Hence, we removed these examples from the validation (5.2%) and test (5.5%) sets.
To ensure that our workers are correctly performing the task, we randomly inserted trusted pairs, i.e., pairs among the 54% on which all three authors agreed on the label. For each set of 10 pairs presented to a worker, one trusted pair was introduced at a random location, so that the worker, while being told that there is such a test pair, cannot figure out which one it is. Via an in-browser check, we only allow workers to submit their answers for each set of 10 instances only if the trusted pair was correctly labelled. Other in-browser checks were done for the collection of explanations, as we will describe in Section SECREF20. More details about the participants and design of the Mechanical Turk task can be found in Appendix SECREF41.
After collecting new labels for the neutral instances in the validation and testing sets, we randomly select and annotate 150 instances from the validation set that were neutral in SNLI-VE. Based on this sample, the error rate went down from 31% to 12% in SNLI-VE-2.0. Looking at the 18 instances where we disagreed with the label assigned by MTurk workers, we noticed that 12 were due to ambiguity in the examples, and 6 were due to workers' errors. Further investigation into potentially eliminating ambiguous instances would likely be beneficial. However, we leave it as future work, and we proceed in this work with using our corrected labels, since our error rate is significantly lower than that of the original SNLI-VE.
Finally, we note that only about 62% of the originally neutral pairs remain neutral, while 21% become contradiction and 17% entailment pairs. Therefore, we are now facing an imbalance between the neutral, entailment, and contradiction instances in the validation and testing sets of SNLI-VE-2.0. The neutral class becomes underrepresented and the label distributions in the corrected validation and testing sets both become E / N / C: 39% / 20% / 41%. To account for this, we compute the balanced accuracy, i.e., the average of the three accuracies on each class.
SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment
Since we decreased the error rate of labels in the validation and test set, we are interested in the performance of a VTE model when using the corrected sets.
SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment ::: Model.
To tackle SNLI-VE, Xie BIBREF1 used EVE (for “Explainable Visual Entailment”), a modified version of the BUTD architecture, the winner of the Visual Question Answering (VQA) challenge in 2017 BIBREF5. Since the EVE implementation is not available at the time of this work, we used the original BUTD architecture, with the same hyperparameters as reported in BIBREF1.
BUTD contains an image processing module and a text processing module. The image processing module encodes each image region proposed by FasterRCNN BIBREF8 into a feature vector using a bottom-up attention mechanism. In the text processing module, the text hypothesis is encoded into a fixed-length vector, which is the last output of a recurrent neural network with 512-GRU units BIBREF9. To input each token into the recurrent network, we use the pretrained GloVe vectors BIBREF10. Finally, a top-down attention mechanism is used between the hypothesis vector and each of the image region vectors to obtain an attention weight for each region. The weighted sum of these image region vectors is then fused with the text hypothesis vector. The multimodal fusion is fed to a multilayer perceptron (MLP) with tanh activations and a final softmax layer to classify the image-sentence relation as entailment, contradiction, or neutral.
Using the implementation from https://github.com/claudiogreco/coling18-gte.
We use the original training set from SNLI-VE. To see the impact of correcting the validation and test sets, we do the following three experiments:
model selection as well as testing are done on the original uncorrected SNLI-VE.
model selection is done on the uncorrected SNLI-VE validation set, while testing is done on the corrected SNLI-VE-2.0 test set.
model selection as well as testing are done on the corrected SNLI-VE-2.0.
Models are trained with cross-entropy loss optimized by the Adam optimizer BIBREF11 with batch size 64. The maximum number of training epochs is set to 100, with early stopping when no improvement is observed on validation accuracy for 3 epochs. The final model checkpoint selected for testing is the one with the highest validation accuracy.
SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment ::: Results.
The results of the three experiments enumerated above are reported in Table TABREF18. Surprisingly, we obtained an accuracy of 73.02% on SNLI-VE using BUTD, which is better than the 71.16% reported by Xie BIBREF1 for the EVE system, which was meant to be an improvement over BUTD. It is also better than their reproduction of BUTD, which gave 68.90%.
The same BUTD model that achieves 73.02% on the uncorrected SNLI-VE test set, achieves 73.18% balanced accuracy when tested on the corrected test set from SNLI-VE-2.0. Hence, for this model, we do not notice a significant difference in performance. This could be due to randomness. Finally, when we run the training loop again, this time doing the model selection on the corrected validation set from SNLI-VE-2.0, we obtain a slightly worse performance of 72.52%, although the difference is not clearly significant.
Finally, we recall that the training set has not been re-annotated, and hence approximately 31% of image-sentence pairs are wrongly labelled as neutral, which likely affects the performance of the model.
Visual-Textual Entailment with Natural Language Explanations
In this work, we also introduce e-SNLI-VE-2.0, a dataset combining SNLI-VE-2.0 with human-written explanations from e-SNLI BIBREF6, which were originally collected to support textual entailment. We replace the explanations for the neutral pairs in the validation and test sets with new ones collected at the same time as the new labels. We extend a current VTE model with an explanation module able to learn from these explanations at training time and generate an explanation for each predicted label at testing time.
Visual-Textual Entailment with Natural Language Explanations ::: e-SNLI-VE-2.0
e-SNLI BIBREF6 is an extension of the SNLI corpus with human-annotated natural language explanations for the ground-truth labels. The authors use the explanations to train models to also generate natural language justifications for their predictions. They collected one explanation for each instance in the training set of SNLI and three explanations for each instance in the validation and testing sets.
We randomly selected 100 image-sentence pairs in the validation set of SNLI-VE and their corresponding explanations in e-SNLI and examined how relevant these explanations are for the VTE task. More precisely, we say that an explanation is relevant if it provides information that justifies the relationship between the image and the sentence. We restricted the count to correctly labelled inputs and found that 57% of explanations were relevant. For example, the explanation for entailment in Figure FIGREF21 (“Cooking in his apartment is cooking”) was counted as irrelevant in our statistics, because it would not be the best explanation for an image-sentence pair, even though it is coherent with the textual pair. We investigate whether these explanations improve a VTE model when enhanced with a component that can process explanations at train time and output them at test time.
To form e-SNLI-VE-2.0, we append to SNLI-VE-2.0 the explanations from e-SNLI for all except the neutral pairs in the validation and test sets of SNLI-VE, which we replace with newly crowdsourced explanations collected at the same time as the labels for these splits (see Figure FIGREF21). Statistics of e-SNLI-VE-2.0 are shown in Appendix SECREF39, Table TABREF40.
Visual-Textual Entailment with Natural Language Explanations ::: Collecting Explanations
As mentioned before, in order to submit the annotation of an image-sentence pair, three steps must be completed: workers must choose a label, highlight words in the hypothesis, and use at least half of the highlighted words to write an explanation for their decision. The last two steps thus follow the quality control of crowd-sourced explanations introduced by Camburu BIBREF6. We also ensured that workers do not simply use a copy of the given hypothesis as explanation. We ensured all the above via in-browser checks before workers' submission. An example of collected explanations is given in Figure FIGREF21.
To check the success of our crowdsourcing, we manually assessed the relevance of explanations among a random subset of 100 examples. A marking scale between 0 and 1 was used, assigning a score of $k$/$n$ when $k$ out of $n$ required attributes were mentioned in an explanation. We report an 83.5% relevance of explanations from workers. We note that, since our explanations are VTE-specific, they were phrased differently from the ones in e-SNLI, with more specific mentions of the images (e.g., “There is no labcoat in the picture, just a man wearing a blue shirt.”, “There are no apples or oranges shown in the picture, only bananas.”). Therefore, it would likely be beneficial to collect new explanations for all of SNLI-VE-2.0 (not only for the neutral pairs in the validation and test sets) such that models can learn to output convincing explanations for the task at hand. However, we leave this as future work and report here the results that one obtains when using the explanations from e-SNLI-VE-2.0.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations
This section presents two VTE models that generate natural language explanations for their own decisions. We name them PaE-BUTD-VE and EtP-BUTD-VE, where PaE (resp. EtP) is for PredictAndExplain (resp. ExplainThenPredict), two models with similar principles introduced by Camburu BIBREF6. The first system learns to generate an explanation conditioned on the image premise, textual hypothesis, and predicted label. In contrast, the second system learns to first generate an explanation conditioned on the image premise and textual hypothesis, and subsequently makes a prediction solely based on the explanation.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain
PaE-BUTD-VE is a system for solving VTE and generating natural language explanations for the predicted labels. The explanations are conditioned on the image premise, the text hypothesis, and the predicted label (ground-truth label at train time), as shown in Figure FIGREF24.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Model.
As described in Section SECREF12, in the BUTD model, the hypothesis vector and the image vector were fused in a fixed-size feature vector f. The vector f was then given as input to an MLP which outputs a probability distribution over the three labels. In PaE-BUTD-VE, in addition to the classification layer, we add a 512-LSTM BIBREF12 decoder to generate an explanation. The decoder takes the feature vector f as initial state. Following Camburu BIBREF6, we prepend the label as a token at the beginning of the explanation to condition the explanation on the label. The ground truth label is provided at training time, whereas the predicted label is given at test time.
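For illustration, a minimal PyTorch-style sketch of this label-conditioned decoder is given below; the class name, embedding size, and tensor shapes are our own simplifying assumptions rather than the exact implementation.

import torch
import torch.nn as nn

class ExplanationDecoder(nn.Module):
    # Illustrative sketch: a 512-unit LSTM decoder initialised with the fused
    # image-hypothesis feature vector f; the (ground-truth or predicted) label
    # is prepended as the first input token to condition the explanation.
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=512, fused_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.init_h = nn.Linear(fused_dim, hidden_dim)   # map f to the initial hidden state
        self.init_c = nn.Linear(fused_dim, hidden_dim)   # map f to the initial cell state
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, f, label_token, explanation_tokens):
        # f: (batch, fused_dim); label_token: (batch, 1); explanation_tokens: (batch, T)
        h0 = self.init_h(f).unsqueeze(0)                 # (1, batch, hidden_dim)
        c0 = self.init_c(f).unsqueeze(0)
        inputs = torch.cat([label_token, explanation_tokens], dim=1)  # prepend the label token
        emb = self.embed(inputs)
        hidden, _ = self.lstm(emb, (h0, c0))
        return self.out(hidden)                          # logits over the explanation vocabulary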
At test time, we use beam search with a beam width of 3 to decode explanations. For memory and time reduction, we replaced words that appeared less than 15 times among explanations with “#UNK#”. This strategy reduces the output vocabulary size to approximately 8.6k words.
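The vocabulary pruning step can be sketched as follows; this is an illustrative snippet assuming whitespace tokenisation, where only the 15-occurrence threshold and the #UNK# token come from the description above.

from collections import Counter

def build_explanation_vocab(explanations, min_count=15, unk="#UNK#"):
    # Count word frequencies over all training explanations and keep only
    # words appearing at least min_count times; the rest map to #UNK#.
    counts = Counter(w for expl in explanations for w in expl.split())
    vocab = {w for w, c in counts.items() if c >= min_count}
    def encode(expl):
        return [w if w in vocab else unk for w in expl.split()]
    return vocab, encode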
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Loss.
The training loss is a weighted combination of the classification loss and the explanation loss, both computed using softmax cross entropy: $\mathcal{L} = \alpha \mathcal{L}_{label} + (1-\alpha)\mathcal{L}_{explanation}$, with $\alpha \in [0,1]$.
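A minimal sketch of this objective in PyTorch, assuming padded explanation targets and a hypothetical pad_idx, is:

import torch.nn.functional as F

def pae_loss(label_logits, label_targets, expl_logits, expl_targets, alpha=0.4, pad_idx=0):
    # Weighted sum of the 3-way classification loss and the per-token
    # explanation loss, both softmax cross entropy; alpha lies in [0, 1].
    label_loss = F.cross_entropy(label_logits, label_targets)
    expl_loss = F.cross_entropy(
        expl_logits.reshape(-1, expl_logits.size(-1)),
        expl_targets.reshape(-1),
        ignore_index=pad_idx,
    )
    return alpha * label_loss + (1 - alpha) * expl_loss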
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Model selection.
In this experiment, we are first interested in examining if a neural network can generate explanations at no cost for label accuracy. Therefore, only balanced accuracy on label is used for the model selection criterion. However, future work can investigate other selection criteria involving a combination between the label and explanation performances. We performed hyperparameter search on $\alpha $, considering values between 0.2 and 0.8 with a step of 0.2. We found $\alpha =0.4$ to produce the best validation balanced accuracy of 72.81%, while BUTD trained without explanations yielded a similar 72.58% validation balanced accuracy.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Results.
As summarised in Table TABREF30, we obtain a test balanced accuracy for PaE-BUTD-VE of 73%, while the same model trained without explanations obtains 72.52%. This is encouraging, since it shows that one can obtain additional natural language explanations without sacrificing performance (and possibly even improving the label performance; however, future work is needed to conclude whether the $0.48\%$ improvement in performance is statistically significant).
Camburu BIBREF6 mentioned that the BLEU score was not an appropriate measure for the quality of explanations and suggested human evaluation instead. We therefore manually scored the relevance of 100 explanations that were generated when the model predicted correct labels. We found that only 20% of explanations were relevant. We highlight that the relevance of explanations is in terms of whether the explanation reflects ground-truth reasons supporting the correct label. This is not to be confused with whether an explanation is correctly illustrating the inner working of the model, which is left as future work. It is also important to note that on a similar experimental setting, Camburu report as low as 34.68% correct explanations, training with explanations that were actually collected for their task. Lastly, the model selection criterion at validation time was the prediction balanced accuracy, which may contribute to the low quality of explanations. While we show that adding an explanation module does not harm prediction performance, more work is necessary to get models that output trustable explanations.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict
When assigning a label, an explanation is naturally part of the decision-making process. This motivates the design of a system that explains itself before deciding on a label, called EtP-BUTD-VE. For this system, a first neural network is trained to generate an explanation given an image-sentence input. Separately, a second neural network, called ExplToLabel-VE, is trained to predict a label from an explanation (see Figure FIGREF32).
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Model.
For the first network, we set $\alpha =0$ in the training loss of the PaE-BUTD-VE model to obtain a system that only learns to generate an explanation from the image-sentence input, without label prediction. Hence, in this setting, no label is prepended before the explanation.
For the ExplToLabel-VE model, we use a 512-LSTM followed by an MLP with three 512-unit layers and ReLU activations, and a final softmax layer to classify the explanation as entailment, contradiction, or neutral.
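A minimal sketch of such a classifier, with our own assumptions about the embedding layer and layer ordering, is:

import torch.nn as nn

class ExplToLabel(nn.Module):
    # Illustrative sketch: encode the explanation with a 512-unit LSTM and
    # classify its final state into {entailment, neutral, contradiction}.
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=512, num_labels=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, num_labels),   # softmax is applied inside the loss
        )

    def forward(self, explanation_tokens):
        _, (h_n, _) = self.lstm(self.embed(explanation_tokens))
        return self.mlp(h_n[-1])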
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Model selection.
For ExplToLabel-VE, the best model is selected on balanced accuracy at validation time. For EtP-BUTD-VE, perplexity is used to select the best model parameters at validation time. It is computed between the explanations produced by the LSTM and ground truth explanations from the validation set.
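Perplexity can be computed as the exponential of the average per-token negative log-likelihood of the ground-truth validation explanations; a sketch under our own assumptions about batch shapes and padding is:

import math
import torch
import torch.nn.functional as F

def validation_perplexity(model, val_batches, pad_idx=0):
    # Perplexity = exp(mean per-token negative log-likelihood) of the
    # ground-truth validation explanations under the decoder.
    total_nll, total_tokens = 0.0, 0
    with torch.no_grad():
        for inputs, targets in val_batches:      # assumed shapes: targets is (batch, T)
            logits = model(*inputs)              # assumed model signature
            nll = F.cross_entropy(
                logits.reshape(-1, logits.size(-1)),
                targets.reshape(-1),
                ignore_index=pad_idx,
                reduction="sum",
            )
            total_nll += nll.item()
            total_tokens += (targets != pad_idx).sum().item()
    return math.exp(total_nll / total_tokens)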
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Results.
When we train ExplToLabel-VE on e-SNLI-VE-2.0, we obtain a balanced accuracy of 90.55% on the test set.
As reported in Table TABREF30, the overall EtP-BUTD-VE system achieves 69.40% balanced accuracy on the test set of e-SNLI-VE-2.0, which is a 3% decrease from the non-explanatory BUTD counterpart (72.52%). However, by setting $\alpha $ to zero and selecting the model that gives the best perplexity per word at validation, the quality of explanations increased significantly, to 35% relevance based on manual evaluation. Thus, in our model, generating better explanations involves a small sacrifice in label prediction accuracy, implying a trade-off between explanation generation and accuracy.
We note that there is room for improvement in our explanation generation method. For example, one can implement an attention mechanism similar to Xu BIBREF13, so that each generated word relates to a relevant part of the multimodal feature representation.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Qualitative Analysis of Generated Explanations
We complement our quantitative results with a qualitative analysis of the explanations generated by our enhanced VTE systems. In Figures FIGREF36 and FIGREF37, we present examples of the predicted labels and generated explanations.
Figure FIGREF36 shows an example where the EtP-BUTD-VE model produces both a correct label and a relevant explanation. The label is contradiction, because in the image, the students are playing with a soccer ball and not a basketball, thus contradicting the text hypothesis. Given the composition of the generated sentence (“Students cannot be playing soccer and baseball at the same time.”), ExplToLabel-VE was able to detect a contradiction in the image-sentence input. In comparison, the explanation from e-SNLI-VE-2.0 is not correct, even if it was valid for e-SNLI when the text premise was given. This emphasizes the difficulty that we are facing with generating proper explanations when training on a noisy dataset.
Even when the generated explanations are irrelevant, we noticed that they are on-topic and that most of the time the mistakes come from repetitions of certain sub-phrases. For example, in Figure FIGREF37, PaE-BUTD-VE predicts the label neutral, which is correct, but the explanation contains an erroneous repetition of the n-gram “are in a car”. However, it appears that the system learns to generate a sentence in the form “Just because ...doesn't mean ...”, which is frequently found for the justification of neutral pairs in the training set. The explanation generated by EtP-BUTD-VE adopts the same structure, and the ExplToLabel-VE component correctly classifies the instance as neutral. However, even if the explanation is semantically correct, it is not relevant for the input and fails to explain the classification.
Conclusion
In this paper, we first presented SNLI-VE-2.0, which corrects the neutral instances in the validation and test sets of SNLI-VE. Secondly, we re-evaluated an existing model on the corrected sets in order to update the estimate of its performance on this task. Thirdly, we introduced e-SNLI-VE-2.0, a dataset which extends SNLI-VE-2.0 with natural language explanations. Finally, we trained two types of models that learn from these explanations at training time, and output such explanations at test time, as a stepping stone in explainable artificial intelligence. Our work is a jumping-off point both for the identification and correction of errors in SNLI-VE and for the extension to explainable VTE. We hope that the community will build on our findings to create more robust as well as explainable multimodal systems.
Conclusion ::: Acknowledgements.
This work was supported by the Oxford Internet Institute, a JP Morgan PhD Fellowship 2019-2020, an Oxford-DeepMind Graduate Scholarship, the Alan Turing Institute under the EPSRC grant EP/N510129/1, and the AXA Research Fund, as well as DFG-EXC-Nummer 2064/1-Projektnummer 390727645 and the ERC under the Horizon 2020 program (grant agreement No. 853489).
Appendix ::: Statistics of e-SNLI-VE-2.0
e-SNLI-VE-2.0 is the combination of SNLI-VE-2.0 with explanations from either e-SNLI or our crowdsourced annotations where applicable. The statistics of e-SNLI-VE-2.0, including text hypotheses and explanations, are shown in Table TABREF40.
Appendix ::: Details of the Mechanical Turk Task
We used Amazon Mechanical Turk (MTurk) to collect new labels and explanations for SNLI-VE. 2,060 workers participated in the annotation effort, with an average of 1.98 assignments per worker and a standard deviation of 5.54. We required the workers to have a previous approval rate above 90%. No restriction was put on the workers' location.
Each assignment consisted of a set of 10 image-sentence pairs. For each pair, the participant was asked to (a) choose a label, (b) highlight words in the sentence that led to their decision, and (c) explain their decision in a comprehensive and concise manner, using a subset of the words that they highlighted. The instructions are shown in Figure FIGREF42. Workers were also guided with three annotated examples, one for each label.
For each assignment of 10 questions, one trusted annotation with a gold-standard label was inserted at a random position, as a measure to control the quality of label annotation. Each assignment was completed by three different workers. An example question is shown in Figure FIGREF8 in the core paper.
Appendix ::: Ambiguous Examples from SNLI-VE
Some examples in SNLI-VE were ambiguous and could find correct justifications for incompatible labels, as shown in Figures FIGREF44, FIGREF45, and FIGREF46. | Totally 6980 validation and test image-sentence pairs have been corrected. |
0c557b408183630d1c6c325b5fb9ff1573661290 | 0c557b408183630d1c6c325b5fb9ff1573661290_0 | Q: How much is performance difference of existing model between original and corrected corpus?
Text: Introduction
Inspired by textual entailment BIBREF0, Xie BIBREF1 introduced the visual-textual entailment (VTE) task, which considers semantic entailment between a premise image and a textual hypothesis. Semantic entailment consists in determining if the hypothesis can be concluded from the premise, and assigning to each pair of (premise image, textual hypothesis) a label among entailment, neutral, and contradiction. In Figure FIGREF3, the label for the first image-sentence pair is entailment, because the hypothesis states that “a bunch of people display different flags”, which can be clearly derived from the image. On the contrary, the second image-sentence pair is labelled as contradiction, because the hypothesis stating that “people [are] running a marathon” contradicts the image with static people.
Xie also propose the SNLI-VE dataset as the first dataset for VTE. SNLI-VE is built from the textual entailment SNLI dataset BIBREF0 by replacing textual premises with the Flickr30k images that they originally described BIBREF2. However, images contain more information than their descriptions, which may entail or contradict the textual hypotheses (see Figure FIGREF3). As a result, the neutral class in SNLI-VE has substantial labelling errors. Vu BIBREF3 estimated ${\sim }31\%$ errors in this class, and ${\sim }1\%$ for the contradiction and entailment classes.
Xie BIBREF1 introduced the VTE task under the name of “visual entailment”, which could imply recognizing entailment between images only. This paper prefers to follow Suzuki BIBREF4 and call it “visual-textual entailment” instead, as it involves reasoning on image-sentence pairs.
In this work, we first focus on decreasing the error in the neutral class by collecting new labels for the neutral pairs in the validation and test sets of SNLI-VE, using Amazon Mechanical Turk (MTurk). To ensure high quality annotations, we used a series of quality control measures, such as in-browser checks, inserting trusted examples, and collecting three annotations per instance. Secondly, we re-evaluate current image-text understanding systems, such as the bottom-up top-down attention network (BUTD) BIBREF5 on VTE using our corrected dataset, which we call SNLI-VE-2.0.
Thirdly, we introduce the e-SNLI-VE-2.0 corpus, which we form by appending human-written natural language explanations to SNLI-VE-2.0. These explanations were collected in e-SNLI BIBREF6 to support textual entailment for SNLI. For the same reasons as above, we re-annotate the explanations for the neutral pairs in the validation and test sets, while keeping the explanations from e-SNLI for all the rest. Finally, we extend a current VTE model with the capacity of learning from these explanations at training time and outputting an explanation for each predicted label at testing time.
SNLI-VE-2.0
The goal of VTE is to determine if a textual hypothesis $H_{text}$ can be concluded, given the information in a premise image $P_{image}$ BIBREF1. There are three possible labels:
Entailment: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is true.
Contradiction: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is false.
Neutral: if neither of the earlier two are true.
The SNLI-VE dataset proposed by Xie BIBREF1 is the combination of Flickr30k, a popular image dataset for image captioning BIBREF2 and SNLI, an influential dataset for natural language inference BIBREF0. Textual premises from SNLI are replaced with images from Flickr30k, which is possible, as these premises were originally collected as captions of these images (see Figure FIGREF3).
However, in practice, a considerable proportion of labels are wrong due to the additional information contained in images. This mostly affects neutral pairs, since images may contain the necessary information to ground a hypothesis for which a simple premise caption was not sufficient. An example is shown in Figure FIGREF3. Vu BIBREF3 report that the label is wrong for ${\sim }31\%$ of neutral examples, based on a random subset of 171 neutral points from the test set. We also annotated 150 random neutral examples from the test set and found a similar percentage of 30.6% errors.
Our annotations are available at https://github.com/virginie-do/e-SNLI-VE/tree/master/annotations/gt_labels.csv
SNLI-VE-2.0 ::: Re-annotation details
In this work, we only collect new labels for the neutral pairs in the validation and test sets of SNLI-VE. While the procedure of re-annotation is generic, we limit our re-annotation to these splits as a first step to verify the difference in performance that current models have when evaluated on the corrected test set as well as the effect of model selection on the corrected validation set. We leave for future work re-annotation of the training set, which would likely lead to training better VTE models. We also chose not to re-annotate entailment and contradiction classes, as their error rates are much lower ($<$1% as reported by Vu BIBREF3).
The main question that we want our dataset to answer is: “What is the relationship between the image premise and the sentence hypothesis?”. We provide workers with the definitions of entailment, neutral, and contradiction for image-sentence pairs and one example for each label. As shown in Figure FIGREF8, for each image-sentence pair, workers are required to (a) choose a label, (b) highlight words in the sentence that led to their decision, and (c) explain their decision in a comprehensive and concise manner, using at least half of the words that they highlighted. The collected explanations will be presented in more detail in Section SECREF20, as we focus here on the label correction. We point out that it is likely that requiring an explanation at the same time as requiring a label has a positive effect on the correctness of the label, since having to justify in writing the picked label may make workers pay an increased attention. Moreover, we implemented additional quality control measures for crowdsourced annotations, such as (a) collecting three annotations for every input, (b) injecting trusted annotations into the task for verification BIBREF7, and (c) restricting to workers with at least 90% previous approval rate.
First, we noticed that some instances in SNLI-VE are ambiguous. We show some examples in Figure FIGREF3 and in Appendix SECREF43. In order to have a better sense of this ambiguity, three authors of this paper independently annotated 100 random examples. All three authors agreed on 54% of the examples, exactly two authors agreed on 45%, and there was only one example on which all three authors disagreed. We identified the following three major sources of ambiguity:
mapping an emotion in the hypothesis to a facial expression in the image premise, e.g., “people enjoy talking”, “angry people”, “sad woman”. Even when the face is seen, it may be subjective to infer an emotion from a static image (see Figure FIGREF44 in Appendix SECREF43).
personal taste, e.g., “the sign is ugly”.
lack of consensus on terms such as “many people” or “crowded”.
To account for the ambiguity that the neutral labels seem to present, we considered that an image-sentence pair is too ambiguous and not suitable for a well-defined visual-textual entailment task when three different labels were assigned by the three workers. Hence, we removed these examples from the validation (5.2%) and test (5.5%) sets.
To ensure that our workers are correctly performing the task, we randomly inserted trusted pairs, i.e., pairs among the 54% on which all three authors agreed on the label. For each set of 10 pairs presented to a worker, one trusted pair was introduced at a random location, so that the worker, while being told that there is such a test pair, cannot figure out which one it is. Via an in-browser check, we allowed workers to submit their answers for each set of 10 instances only if the trusted pair was correctly labelled. Other in-browser checks were done for the collection of explanations, as we will describe in Section SECREF20. More details about the participants and design of the Mechanical Turk task can be found in Appendix SECREF41.
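The injection of a trusted pair can be sketched as below; the function and variable names are illustrative only and do not reflect the actual MTurk setup code.

import random

def build_assignment(unlabelled_pairs, trusted_pool, set_size=10):
    # Take nine unlabelled pairs and insert one gold-labelled (trusted) pair
    # at a random position, so the worker cannot tell which item is the check.
    items = random.sample(unlabelled_pairs, set_size - 1)
    trusted = random.choice(trusted_pool)
    position = random.randrange(set_size)
    items.insert(position, trusted)
    return items, position   # position is kept server-side for the in-browser check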
After collecting new labels for the neutral instances in the validation and testing sets, we randomly select and annotate 150 instances from the validation set that were neutral in SNLI-VE. Based on this sample, the error rate went down from 31% to 12% in SNLI-VE-2.0. Looking at the 18 instances where we disagreed with the label assigned by MTurk workers, we noticed that 12 were due to ambiguity in the examples, and 6 were due to workers' errors. Further investigation into potentially eliminating ambiguous instances would likely be beneficial. However, we leave it as future work, and we proceed in this work with using our corrected labels, since our error rate is significantly lower than that of the original SNLI-VE.
Finally, we note that only about 62% of the originally neutral pairs remain neutral, while 21% become contradiction and 17% entailment pairs. Therefore, we are now facing an imbalance between the neutral, entailment, and contradiction instances in the validation and testing sets of SNLI-VE-2.0. The neutral class becomes underrepresented and the label distributions in the corrected validation and testing sets both become E / N / C: 39% / 20% / 41%. To account for this, we compute the balanced accuracy, i.e., the average of the three accuracies on each class.
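Concretely, balanced accuracy is the unweighted mean of the per-class accuracies; a small sketch (assuming every class appears at least once in the gold labels) is:

from collections import defaultdict

def balanced_accuracy(y_true, y_pred, labels=("entailment", "neutral", "contradiction")):
    # Average of the three per-class accuracies (recalls), which corrects for
    # the class imbalance in the re-annotated validation and test sets.
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    return sum(correct[l] / total[l] for l in labels) / len(labels)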
SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment
Since we decreased the error rate of labels in the validation and test set, we are interested in the performance of a VTE model when using the corrected sets.
SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment ::: Model.
To tackle SNLI-VE, Xie BIBREF1 used EVE (for “Explainable Visual Entailment”), a modified version of the BUTD architecture, the winner of the Visual Question Answering (VQA) challenge in 2017 BIBREF5. Since the EVE implementation is not available at the time of this work, we used the original BUTD architecture, with the same hyperparameters as reported in BIBREF1.
BUTD contains an image processing module and a text processing module. The image processing module encodes each image region proposed by FasterRCNN BIBREF8 into a feature vector using a bottom-up attention mechanism. In the text processing module, the text hypothesis is encoded into a fixed-length vector, which is the last output of a recurrent neural network with 512-GRU units BIBREF9. To input each token into the recurrent network, we use the pretrained GloVe vectors BIBREF10. Finally, a top-down attention mechanism is used between the hypothesis vector and each of the image region vectors to obtain an attention weight for each region. The weighted sum of these image region vectors is then fused with the text hypothesis vector. The multimodal fusion is fed to a multilayer perceptron (MLP) with tanh activations and a final softmax layer to classify the image-sentence relation as entailment, contradiction, or neutral.
Using the implementation from https://github.com/claudiogreco/coling18-gte.
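A simplified sketch of the top-down attention and fusion step is given below; the region dimensionality, the attention scorer, and the fusion by element-wise product are our own simplifying assumptions and may differ from the exact BUTD implementation.

import torch
import torch.nn as nn

class TopDownFusion(nn.Module):
    # Illustrative sketch: score each image region against the hypothesis vector,
    # take the attention-weighted sum of regions, and fuse it with the hypothesis.
    def __init__(self, region_dim=2048, hyp_dim=512, att_dim=512, fused_dim=512):
        super().__init__()
        self.att = nn.Sequential(
            nn.Linear(region_dim + hyp_dim, att_dim), nn.Tanh(), nn.Linear(att_dim, 1)
        )
        self.proj_img = nn.Linear(region_dim, fused_dim)
        self.proj_hyp = nn.Linear(hyp_dim, fused_dim)

    def forward(self, regions, hyp):
        # regions: (batch, num_regions, region_dim); hyp: (batch, hyp_dim)
        hyp_exp = hyp.unsqueeze(1).expand(-1, regions.size(1), -1)
        scores = self.att(torch.cat([regions, hyp_exp], dim=-1))   # (batch, R, 1)
        weights = torch.softmax(scores, dim=1)                     # attention weight per region
        attended = (weights * regions).sum(dim=1)                  # weighted sum of regions
        return torch.tanh(self.proj_img(attended)) * torch.tanh(self.proj_hyp(hyp))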
We use the original training set from SNLI-VE. To see the impact of correcting the validation and test sets, we do the following three experiments:
model selection as well as testing are done on the original uncorrected SNLI-VE.
model selection is done on the uncorrected SNLI-VE validation set, while testing is done on the corrected SNLI-VE-2.0 test set.
model selection as well as testing are done on the corrected SNLI-VE-2.0.
Models are trained with cross-entropy loss optimized by the Adam optimizer BIBREF11 with batch size 64. The maximum number of training epochs is set to 100, with early stopping when no improvement is observed on validation accuracy for 3 epochs. The final model checkpoint selected for testing is the one with the highest validation accuracy.
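The optimisation schedule described above can be sketched as follows; the learning rate, the evaluate callback, and the model interface are placeholders we assume for illustration.

import copy
import torch

def train(model, train_loader, evaluate, max_epochs=100, patience=3, lr=1e-3):
    # Adam, batches of 64 supplied by the loader, early stopping after 3 epochs
    # without improvement on validation accuracy; keep the best checkpoint.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    best_acc, best_state, epochs_without_improvement = 0.0, None, 0
    for epoch in range(max_epochs):
        model.train()
        for batch, targets in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(batch), targets)
            loss.backward()
            optimizer.step()
        val_acc = evaluate(model)                     # validation accuracy
        if val_acc > best_acc:
            best_acc, best_state = val_acc, copy.deepcopy(model.state_dict())
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break
    model.load_state_dict(best_state)                 # final checkpoint = best validation accuracy
    return model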
SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment ::: Results.
The results of the three experiments enumerated above are reported in Table TABREF18. Surprisingly, we obtained an accuracy of 73.02% on SNLI-VE using BUTD, which is better than the 71.16% reported by Xie BIBREF1 for the EVE system, which was meant to be an improvement over BUTD. It is also better than their reproduction of BUTD, which gave 68.90%.
The same BUTD model that achieves 73.02% on the uncorrected SNLI-VE test set, achieves 73.18% balanced accuracy when tested on the corrected test set from SNLI-VE-2.0. Hence, for this model, we do not notice a significant difference in performance. This could be due to randomness. Finally, when we run the training loop again, this time doing the model selection on the corrected validation set from SNLI-VE-2.0, we obtain a slightly worse performance of 72.52%, although the difference is not clearly significant.
Finally, we recall that the training set has not been re-annotated, and hence approximately 31% image-sentence pairs are wrongly labelled as neutral, which likely affects the performance of the model.
Visual-Textual Entailment with Natural Language Explanations
In this work, we also introduce e-SNLI-VE-2.0, a dataset combining SNLI-VE-2.0 with human-written explanations from e-SNLI BIBREF6, which were originally collected to support textual entailment. We replace the explanations for the neutral pairs in the validation and test sets with new ones collected at the same time as the new labels. We extend a current VTE model with an explanation module able to learn from these explanations at training time and generate an explanation for each predicted label at testing time.
Visual-Textual Entailment with Natural Language Explanations ::: e-SNLI-VE-2.0
e-SNLI BIBREF6 is an extension of the SNLI corpus with human-annotated natural language explanations for the ground-truth labels. The authors use the explanations to train models to also generate natural language justifications for their predictions. They collected one explanation for each instance in the training set of SNLI and three explanations for each instance in the validation and testing sets.
We randomly selected 100 image-sentence pairs in the validation set of SNLI-VE and their corresponding explanations in e-SNLI and examined how relevant these explanations are for the VTE task. More precisely, we say that an explanation is relevant if it brings information that justifies the relationship between the image and the sentence. We restricted the count to correctly labelled inputs and found that 57% explanations were relevant. For example, the explanation for entailment in Figure FIGREF21 (“Cooking in his apartment is cooking”) was counted as irrelevant in our statistics, because it would not be the best explanation for an image-sentence pair, even though it is coherent with the textual pair. We investigate whether these explanations improve a VTE model when enhanced with a component that can process explanations at train time and output them at test time.
To form e-SNLI-VE-2.0, we append to SNLI-VE-2.0 the explanations from e-SNLI for all except the neutral pairs in the validation and test sets of SNLI-VE, which we replace with newly crowdsourced explanations collected at the same time as the labels for these splits (see Figure FIGREF21). Statistics of e-SNLI-VE-2.0 are shown in Appendix SECREF39, Table TABREF40.
Visual-Textual Entailment with Natural Language Explanations ::: Collecting Explanations
As mentioned before, in order to submit the annotation of an image-sentence pair, three steps must be completed: workers must choose a label, highlight words in the hypothesis, and use at least half of the highlighted words to write an explanation for their decision. The last two steps thus follow the quality control of crowd-sourced explanations introduced by Camburu BIBREF6. We also ensured that workers do not simply use a copy of the given hypothesis as explanation. We ensured all the above via in-browser checks before workers' submission. An example of collected explanations is given in Figure FIGREF21.
To check the success of our crowdsourcing, we manually assessed the relevance of explanations among a random subset of 100 examples. A marking scale between 0 and 1 was used, assigning a score of $k$/$n$ when $k$ required attributes were given in an explanation out of $n$. We report an 83.5% relevance of explanations from workers. We note that, since our explanations are VTE-specific, they were phrased differently from the ones in e-SNLI, with more specific mentions to the images (e.g., “There is no labcoat in the picture, just a man wearing a blue shirt.”, “There are no apples or oranges shown in the picture, only bananas.”). Therefore, it would likely be beneficial to collect new explanations for all SNLI-VE-2.0 (not only for the neutral pairs in the validation and test sets) such that models can learn to output convincing explanations for the task at hand. However, we leave this as future work, and we show in this work the results that one obtains when using the explanations from e-SNLI-VE-2.0.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations
This section presents two VTE models that generate natural language explanations for their own decisions. We name them PaE-BUTD-VE and EtP-BUTD-VE, where PaE (resp. EtP) is for PredictAndExplain (resp. ExplainThenPredict), two models with similar principles introduced by Camburu BIBREF6. The first system learns to generate an explanation conditioned on the image premise, textual hypothesis, and predicted label. In contrast, the second system learns to first generate an explanation conditioned on the image premise and textual hypothesis, and subsequently makes a prediction solely based on the explanation.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain
PaE-BUTD-VE is a system for solving VTE and generating natural language explanations for the predicted labels. The explanations are conditioned on the image premise, the text hypothesis, and the predicted label (ground-truth label at train time), as shown in Figure FIGREF24.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Model.
As described in Section SECREF12, in the BUTD model, the hypothesis vector and the image vector were fused in a fixed-size feature vector f. The vector f was then given as input to an MLP which outputs a probability distribution over the three labels. In PaE-BUTD-VE, in addition to the classification layer, we add a 512-LSTM BIBREF12 decoder to generate an explanation. The decoder takes the feature vector f as initial state. Following Camburu BIBREF6, we prepend the label as a token at the beginning of the explanation to condition the explanation on the label. The ground truth label is provided at training time, whereas the predicted label is given at test time.
At test time, we use beam search with a beam width of 3 to decode explanations. For memory and time reduction, we replaced words that appeared less than 15 times among explanations with “#UNK#”. This strategy reduces the output vocabulary size to approximately 8.6k words.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Loss.
The training loss is a weighted combination of the classification loss and the explanation loss, both computed using softmax cross entropy: $\mathcal{L} = \alpha \mathcal{L}_{label} + (1-\alpha)\mathcal{L}_{explanation}$, with $\alpha \in [0,1]$.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Model selection.
In this experiment, we are first interested in examining if a neural network can generate explanations at no cost for label accuracy. Therefore, only balanced accuracy on label is used for the model selection criterion. However, future work can investigate other selection criteria involving a combination between the label and explanation performances. We performed hyperparameter search on $\alpha $, considering values between 0.2 and 0.8 with a step of 0.2. We found $\alpha =0.4$ to produce the best validation balanced accuracy of 72.81%, while BUTD trained without explanations yielded a similar 72.58% validation balanced accuracy.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Results.
As summarised in Table TABREF30, we obtain a test balanced accuracy for PaE-BUTD-VE of 73%, while the same model trained without explanations obtains 72.52%. This is encouraging, since it shows that one can obtain additional natural language explanations without sacrificing performance (and possibly even improving the label performance; however, future work is needed to conclude whether the $0.48\%$ improvement in performance is statistically significant).
Camburu BIBREF6 mentioned that the BLEU score was not an appropriate measure for the quality of explanations and suggested human evaluation instead. We therefore manually scored the relevance of 100 explanations that were generated when the model predicted correct labels. We found that only 20% of explanations were relevant. We highlight that the relevance of explanations is in terms of whether the explanation reflects ground-truth reasons supporting the correct label. This is not to be confused with whether an explanation is correctly illustrating the inner working of the model, which is left as future work. It is also important to note that on a similar experimental setting, Camburu report as low as 34.68% correct explanations, training with explanations that were actually collected for their task. Lastly, the model selection criterion at validation time was the prediction balanced accuracy, which may contribute to the low quality of explanations. While we show that adding an explanation module does not harm prediction performance, more work is necessary to get models that output trustable explanations.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict
When assigning a label, an explanation is naturally part of the decision-making process. This motivates the design of a system that explains itself before deciding on a label, called EtP-BUTD-VE. For this system, a first neural network is trained to generate an explanation given an image-sentence input. Separately, a second neural network, called ExplToLabel-VE, is trained to predict a label from an explanation (see Figure FIGREF32).
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Model.
For the first network, we set $\alpha =0$ in the training loss of the PaE-BUTD-VE model to obtain a system that only learns to generate an explanation from the image-sentence input, without label prediction. Hence, in this setting, no label is prepended before the explanation.
For the ExplToLabel-VE model, we use a 512-LSTM followed by an MLP with three 512-unit layers and ReLU activations, and a final softmax layer to classify the explanation as entailment, contradiction, or neutral.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Model selection.
For ExplToLabel-VE, the best model is selected on balanced accuracy at validation time. For EtP-BUTD-VE, perplexity is used to select the best model parameters at validation time. It is computed between the explanations produced by the LSTM and ground truth explanations from the validation set.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Results.
When we train ExplToLabel-VE on e-SNLI-VE-2.0, we obtain a balanced accuracy of 90.55% on the test set.
As reported in Table TABREF30, the overall EtP-BUTD-VE system achieves 69.40% balanced accuracy on the test set of e-SNLI-VE-2.0, which is a 3% decrease from the non-explanatory BUTD counterpart (72.52%). However, by setting $\alpha $ to zero and selecting the model that gives the best perplexity per word at validation, the quality of explanations increased significantly, to 35% relevance based on manual evaluation. Thus, in our model, generating better explanations involves a small sacrifice in label prediction accuracy, implying a trade-off between explanation generation and accuracy.
We note that there is room for improvement in our explanation generation method. For example, one can implement an attention mechanism similar to Xu BIBREF13, so that each generated word relates to a relevant part of the multimodal feature representation.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Qualitative Analysis of Generated Explanations
We complement our quantitative results with a qualitative analysis of the explanations generated by our enhanced VTE systems. In Figures FIGREF36 and FIGREF37, we present examples of the predicted labels and generated explanations.
Figure FIGREF36 shows an example where the EtP-BUTD-VE model produces both a correct label and a relevant explanation. The label is contradiction, because in the image, the students are playing with a soccer ball and not a basketball, thus contradicting the text hypothesis. Given the composition of the generated sentence (“Students cannot be playing soccer and baseball at the same time.”), ExplToLabel-VE was able to detect a contradiction in the image-sentence input. In comparison, the explanation from e-SNLI-VE-2.0 is not correct, even if it was valid for e-SNLI when the text premise was given. This emphasizes the difficulty that we are facing with generating proper explanations when training on a noisy dataset.
Even when the generated explanations are irrelevant, we noticed that they are on-topic and that most of the time the mistakes come from repetitions of certain sub-phrases. For example, in Figure FIGREF37, PaE-BUTD-VE predicts the label neutral, which is correct, but the explanation contains an erroneous repetition of the n-gram “are in a car”. However, it appears that the system learns to generate a sentence in the form “Just because ...doesn't mean ...”, which is frequently found for the justification of neutral pairs in the training set. The explanation generated by EtP-BUTD-VE adopts the same structure, and the ExplToLabel-VE component correctly classifies the instance as neutral. However, even if the explanation is semantically correct, it is not relevant for the input and fails to explain the classification.
Conclusion
In this paper, we first presented SNLI-VE-2.0, which corrects the neutral instances in the validation and test sets of SNLI-VE. Secondly, we re-evaluated an existing model on the corrected sets in order to update the estimate of its performance on this task. Thirdly, we introduced e-SNLI-VE-2.0, a dataset which extends SNLI-VE-2.0 with natural language explanations. Finally, we trained two types of models that learn from these explanations at training time, and output such explanations at test time, as a stepping stone in explainable artificial intelligence. Our work is a jumping-off point both for the identification and correction of errors in SNLI-VE and for the extension to explainable VTE. We hope that the community will build on our findings to create more robust as well as explainable multimodal systems.
Conclusion ::: Acknowledgements.
This work was supported by the Oxford Internet Institute, a JP Morgan PhD Fellowship 2019-2020, an Oxford-DeepMind Graduate Scholarship, the Alan Turing Institute under the EPSRC grant EP/N510129/1, and the AXA Research Fund, as well as DFG-EXC-Nummer 2064/1-Projektnummer 390727645 and the ERC under the Horizon 2020 program (grant agreement No. 853489).
Appendix ::: Statistics of e-SNLI-VE-2.0
e-SNLI-VE-2.0 is the combination of SNLI-VE-2.0 with explanations from either e-SNLI or our crowdsourced annotations where applicable. The statistics of e-SNLI-VE-2.0, including text hypotheses and explanations, are shown in Table TABREF40.
Appendix ::: Details of the Mechanical Turk Task
We used Amazon Mechanical Turk (MTurk) to collect new labels and explanations for SNLI-VE. 2,060 workers participated in the annotation effort, with an average of 1.98 assignments per worker and a standard deviation of 5.54. We required the workers to have a previous approval rate above 90%. No restriction was put on the workers' location.
Each assignment consisted of a set of 10 image-sentence pairs. For each pair, the participant was asked to (a) choose a label, (b) highlight words in the sentence that led to their decision, and (c) explain their decision in a comprehensive and concise manner, using a subset of the words that they highlighted. The instructions are shown in Figure FIGREF42. Workers were also guided with three annotated examples, one for each label.
For each assignment of 10 questions, one trusted annotation with a gold-standard label was inserted at a random position, as a measure to control the quality of label annotation. Each assignment was completed by three different workers. An example question is shown in Figure FIGREF8 in the core paper.
Appendix ::: Ambiguous Examples from SNLI-VE
Some examples in SNLI-VE were ambiguous and could find correct justifications for incompatible labels, as shown in Figures FIGREF44, FIGREF45, and FIGREF46. | 73.02% on the uncorrected SNLI-VE test set, achieves 73.18% balanced accuracy when tested on the corrected test set |
a08b5018943d4428f067c08077bfff1af3de9703 | a08b5018943d4428f067c08077bfff1af3de9703_0 | Q: What is the class with highest error rate in SNLI-VE?
Text: Introduction
Inspired by textual entailment BIBREF0, Xie BIBREF1 introduced the visual-textual entailment (VTE) task, which considers semantic entailment between a premise image and a textual hypothesis. Semantic entailment consists in determining if the hypothesis can be concluded from the premise, and assigning to each pair of (premise image, textual hypothesis) a label among entailment, neutral, and contradiction. In Figure FIGREF3, the label for the first image-sentence pair is entailment, because the hypothesis states that “a bunch of people display different flags”, which can be clearly derived from the image. On the contrary, the second image-sentence pair is labelled as contradiction, because the hypothesis stating that “people [are] running a marathon” contradicts the image with static people.
Xie also propose the SNLI-VE dataset as the first dataset for VTE. SNLI-VE is built from the textual entailment SNLI dataset BIBREF0 by replacing textual premises with the Flickr30k images that they originally described BIBREF2. However, images contain more information than their descriptions, which may entail or contradict the textual hypotheses (see Figure FIGREF3). As a result, the neutral class in SNLI-VE has substantial labelling errors. Vu BIBREF3 estimated ${\sim }31\%$ errors in this class, and ${\sim }1\%$ for the contradiction and entailment classes.
Xie BIBREF1 introduced the VTE task under the name of “visual entailment”, which could imply recognizing entailment between images only. This paper prefers to follow Suzuki BIBREF4 and call it “visual-textual entailment” instead, as it involves reasoning on image-sentence pairs.
In this work, we first focus on decreasing the error in the neutral class by collecting new labels for the neutral pairs in the validation and test sets of SNLI-VE, using Amazon Mechanical Turk (MTurk). To ensure high quality annotations, we used a series of quality control measures, such as in-browser checks, inserting trusted examples, and collecting three annotations per instance. Secondly, we re-evaluate current image-text understanding systems, such as the bottom-up top-down attention network (BUTD) BIBREF5 on VTE using our corrected dataset, which we call SNLI-VE-2.0.
Thirdly, we introduce the e-SNLI-VE-2.0 corpus, which we form by appending human-written natural language explanations to SNLI-VE-2.0. These explanations were collected in e-SNLI BIBREF6 to support textual entailment for SNLI. For the same reasons as above, we re-annotate the explanations for the neutral pairs in the validation and test sets, while keeping the explanations from e-SNLI for all the rest. Finally, we extend a current VTE model with the capacity of learning from these explanations at training time and outputting an explanation for each predicted label at testing time.
SNLI-VE-2.0
The goal of VTE is to determine if a textual hypothesis $H_{text}$ can be concluded, given the information in a premise image $P_{image}$ BIBREF1. There are three possible labels:
Entailment: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is true.
Contradiction: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is false.
Neutral: if neither of the earlier two are true.
The SNLI-VE dataset proposed by Xie BIBREF1 is the combination of Flickr30k, a popular image dataset for image captioning BIBREF2 and SNLI, an influential dataset for natural language inference BIBREF0. Textual premises from SNLI are replaced with images from Flickr30k, which is possible, as these premises were originally collected as captions of these images (see Figure FIGREF3).
However, in practice, a considerable proportion of labels are wrong due to the additional information contained in images. This mostly affects neutral pairs, since images may contain the necessary information to ground a hypothesis for which a simple premise caption was not sufficient. An example is shown in Figure FIGREF3. Vu BIBREF3 report that the label is wrong for ${\sim }31\%$ of neutral examples, based on a random subset of 171 neutral points from the test set. We also annotated 150 random neutral examples from the test set and found a similar percentage of 30.6% errors.
Our annotations are available at https://github.com/virginie-do/e-SNLI-VE/tree/master/annotations/gt_labels.csv
SNLI-VE-2.0 ::: Re-annotation details
In this work, we only collect new labels for the neutral pairs in the validation and test sets of SNLI-VE. While the procedure of re-annotation is generic, we limit our re-annotation to these splits as a first step to verify the difference in performance that current models have when evaluated on the corrected test set as well as the effect of model selection on the corrected validation set. We leave for future work re-annotation of the training set, which would likely lead to training better VTE models. We also chose not to re-annotate entailment and contradiction classes, as their error rates are much lower ($<$1% as reported by Vu BIBREF3).
The main question that we want our dataset to answer is: “What is the relationship between the image premise and the sentence hypothesis?”. We provide workers with the definitions of entailment, neutral, and contradiction for image-sentence pairs and one example for each label. As shown in Figure FIGREF8, for each image-sentence pair, workers are required to (a) choose a label, (b) highlight words in the sentence that led to their decision, and (c) explain their decision in a comprehensive and concise manner, using at least half of the words that they highlighted. The collected explanations will be presented in more detail in Section SECREF20, as we focus here on the label correction. We point out that it is likely that requiring an explanation at the same time as requiring a label has a positive effect on the correctness of the label, since having to justify in writing the picked label may make workers pay an increased attention. Moreover, we implemented additional quality control measures for crowdsourced annotations, such as (a) collecting three annotations for every input, (b) injecting trusted annotations into the task for verification BIBREF7, and (c) restricting to workers with at least 90% previous approval rate.
First, we noticed that some instances in SNLI-VE are ambiguous. We show some examples in Figure FIGREF3 and in Appendix SECREF43. In order to have a better sense of this ambiguity, three authors of this paper independently annotated 100 random examples. All three authors agreed on 54% of the examples, exactly two authors agreed on 45%, and there was only one example on which all three authors disagreed. We identified the following three major sources of ambiguity:
mapping an emotion in the hypothesis to a facial expression in the image premise, e.g., “people enjoy talking”, “angry people”, “sad woman”. Even when the face is seen, it may be subjective to infer an emotion from a static image (see Figure FIGREF44 in Appendix SECREF43).
personal taste, e.g., “the sign is ugly”.
lack of consensus on terms such as “many people” or “crowded”.
To account for the ambiguity that the neutral labels seem to present, we considered that an image-sentence pair is too ambiguous and not suitable for a well-defined visual-textual entailment task when three different labels were assigned by the three workers. Hence, we removed these examples from the validation (5.2%) and test (5.5%) sets.
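Assuming a majority vote is used whenever at least two of the three workers agree (which is implied but not stated explicitly above), the aggregation rule can be sketched as:

from collections import Counter

def aggregate_labels(worker_labels):
    # worker_labels: list of three labels from three different workers.
    # Returns the majority label, or None when all three disagree, in which
    # case the image-sentence pair is removed as too ambiguous.
    label, count = Counter(worker_labels).most_common(1)[0]
    return label if count >= 2 else None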
To ensure that our workers are correctly performing the task, we randomly inserted trusted pairs, i.e., pairs among the 54% on which all three authors agreed on the label. For each set of 10 pairs presented to a worker, one trusted pair was introduced at a random location, so that the worker, while being told that there is such a test pair, cannot figure out which one it is. Via an in-browser check, we allowed workers to submit their answers for each set of 10 instances only if the trusted pair was correctly labelled. Other in-browser checks were done for the collection of explanations, as we will describe in Section SECREF20. More details about the participants and design of the Mechanical Turk task can be found in Appendix SECREF41.
After collecting new labels for the neutral instances in the validation and testing sets, we randomly select and annotate 150 instances from the validation set that were neutral in SNLI-VE. Based on this sample, the error rate went down from 31% to 12% in SNLI-VE-2.0. Looking at the 18 instances where we disagreed with the label assigned by MTurk workers, we noticed that 12 were due to ambiguity in the examples, and 6 were due to workers' errors. Further investigation into potentially eliminating ambiguous instances would likely be beneficial. However, we leave it as future work, and we proceed in this work with using our corrected labels, since our error rate is significantly lower than that of the original SNLI-VE.
Finally, we note that only about 62% of the originally neutral pairs remain neutral, while 21% become contradiction and 17% entailment pairs. Therefore, we are now facing an imbalance between the neutral, entailment, and contradiction instances in the validation and testing sets of SNLI-VE-2.0. The neutral class becomes underrepresented and the label distributions in the corrected validation and testing sets both become E / N / C: 39% / 20% / 41%. To account for this, we compute the balanced accuracy, i.e., the average of the three accuracies on each class.
SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment
Since we decreased the error rate of labels in the validation and test set, we are interested in the performance of a VTE model when using the corrected sets.
SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment ::: Model.
To tackle SNLI-VE, Xie BIBREF1 used EVE (for “Explainable Visual Entailment”), a modified version of the BUTD architecture, the winner of the Visual Question Answering (VQA) challenge in 2017 BIBREF5. Since the EVE implementation is not available at the time of this work, we used the original BUTD architecture, with the same hyperparameters as reported in BIBREF1.
BUTD contains an image processing module and a text processing module. The image processing module encodes each image region proposed by FasterRCNN BIBREF8 into a feature vector using a bottom-up attention mechanism. In the text processing module, the text hypothesis is encoded into a fixed-length vector, which is the last output of a recurrent neural network with 512-GRU units BIBREF9. To input each token into the recurrent network, we use the pretrained GloVe vectors BIBREF10. Finally, a top-down attention mechanism is used between the hypothesis vector and each of the image region vectors to obtain an attention weight for each region. The weighted sum of these image region vectors is then fused with the text hypothesis vector. The multimodal fusion is fed to a multilayer perceptron (MLP) with tanh activations and a final softmax layer to classify the image-sentence relation as entailment, contradiction, or neutral.
Using the implementation from https://github.com/claudiogreco/coling18-gte.
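A minimal sketch of the text processing module is shown below; the handling of the GloVe matrix and the choice to keep embeddings trainable are our own assumptions.

import torch
import torch.nn as nn

class HypothesisEncoder(nn.Module):
    # Illustrative sketch: embed hypothesis tokens with pretrained GloVe vectors
    # and encode them with a 512-unit GRU; the last output is the hypothesis vector.
    def __init__(self, glove_weights, hidden_dim=512, freeze_embeddings=False):
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(
            torch.as_tensor(glove_weights, dtype=torch.float), freeze=freeze_embeddings
        )
        self.gru = nn.GRU(self.embed.embedding_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (batch, T) -> hypothesis vector: (batch, hidden_dim)
        _, h_n = self.gru(self.embed(token_ids))
        return h_n[-1]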
We use the original training set from SNLI-VE. To see the impact of correcting the validation and test sets, we do the following three experiments:
model selection as well as testing are done on the original uncorrected SNLI-VE.
model selection is done on the uncorrected SNLI-VE validation set, while testing is done on the corrected SNLI-VE-2.0 test set.
model selection as well as testing are done on the corrected SNLI-VE-2.0.
Models are trained with cross-entropy loss optimized by the Adam optimizer BIBREF11 with batch size 64. The maximum number of training epochs is set to 100, with early stopping when no improvement is observed on validation accuracy for 3 epochs. The final model checkpoint selected for testing is the one with the highest validation accuracy.
SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment ::: Results.
The results of the three experiments enumerated above are reported in Table TABREF18. Surprisingly, we obtained an accuracy of 73.02% on SNLI-VE using BUTD, which is better than the 71.16% reported by Xie BIBREF1 for the EVE system, which was meant to be an improvement over BUTD. It is also better than their reproduction of BUTD, which gave 68.90%.
The same BUTD model that achieves 73.02% on the uncorrected SNLI-VE test set, achieves 73.18% balanced accuracy when tested on the corrected test set from SNLI-VE-2.0. Hence, for this model, we do not notice a significant difference in performance. This could be due to randomness. Finally, when we run the training loop again, this time doing the model selection on the corrected validation set from SNLI-VE-2.0, we obtain a slightly worse performance of 72.52%, although the difference is not clearly significant.
Finally, we recall that the training set has not been re-annotated, and hence approximately 31% image-sentence pairs are wrongly labelled as neutral, which likely affects the performance of the model.
Visual-Textual Entailment with Natural Language Explanations
In this work, we also introduce e-SNLI-VE-2.0, a dataset combining SNLI-VE-2.0 with human-written explanations from e-SNLI BIBREF6, which were originally collected to support textual entailment. We replace the explanations for the neutral pairs in the validation and test sets with new ones collected at the same time as the new labels. We extend a current VTE model with an explanation module able to learn from these explanations at training time and generate an explanation for each predicted label at testing time.
Visual-Textual Entailment with Natural Language Explanations ::: e-SNLI-VE-2.0
e-SNLI BIBREF6 is an extension of the SNLI corpus with human-annotated natural language explanations for the ground-truth labels. The authors use the explanations to train models to also generate natural language justifications for their predictions. They collected one explanation for each instance in the training set of SNLI and three explanations for each instance in the validation and testing sets.
We randomly selected 100 image-sentence pairs in the validation set of SNLI-VE and their corresponding explanations in e-SNLI and examined how relevant these explanations are for the VTE task. More precisely, we say that an explanation is relevant if it brings information that justifies the relationship between the image and the sentence. We restricted the count to correctly labelled inputs and found that 57% explanations were relevant. For example, the explanation for entailment in Figure FIGREF21 (“Cooking in his apartment is cooking”) was counted as irrelevant in our statistics, because it would not be the best explanation for an image-sentence pair, even though it is coherent with the textual pair. We investigate whether these explanations improve a VTE model when enhanced with a component that can process explanations at train time and output them at test time.
To form e-SNLI-VE-2.0, we append to SNLI-VE-2.0 the explanations from e-SNLI for all except the neutral pairs in the validation and test sets of SNLI-VE, which we replace with newly crowdsourced explanations collected at the same time as the labels for these splits (see Figure FIGREF21). Statistics of e-SNLI-VE-2.0 are shown in Appendix SECREF39, Table TABREF40.
Visual-Textual Entailment with Natural Language Explanations ::: Collecting Explanations
As mentioned before, in order to submit the annotation of an image-sentence pair, three steps must be completed: workers must choose a label, highlight words in the hypothesis, and use at least half of the highlighted words to write an explanation for their decision. The last two steps thus follow the quality control of crowd-sourced explanations introduced by Camburu BIBREF6. We also ensured that workers do not simply use a copy of the given hypothesis as explanation. We ensured all the above via in-browser checks before workers' submission. An example of collected explanations is given in Figure FIGREF21.
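The actual checks ran in the browser before submission; the snippet below is only a Python sketch of that logic, with hypothetical argument names.

```python
def is_valid_submission(label, hypothesis, highlighted_words, explanation):
    """Label chosen, words highlighted, at least half of the highlighted words reused
    in the explanation, and the explanation is not a copy of the hypothesis."""
    if label not in {"entailment", "neutral", "contradiction"} or not highlighted_words:
        return False
    explanation_tokens = set(explanation.lower().split())
    reused = sum(1 for w in highlighted_words if w.lower() in explanation_tokens)
    if reused < len(highlighted_words) / 2:
        return False
    return explanation.strip().lower() != hypothesis.strip().lower()
```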
To check the success of our crowdsourcing, we manually assessed the relevance of explanations for a random subset of 100 examples. A marking scale between 0 and 1 was used, assigning a score of $k/n$ when an explanation mentioned $k$ out of the $n$ required attributes. We obtain a relevance of 83.5% for the workers' explanations. We note that, since our explanations are VTE-specific, they were phrased differently from the ones in e-SNLI, with more specific mentions of the images (e.g., “There is no labcoat in the picture, just a man wearing a blue shirt.”, “There are no apples or oranges shown in the picture, only bananas.”). Therefore, it would likely be beneficial to collect new explanations for all of SNLI-VE-2.0 (not only for the neutral pairs in the validation and test sets) so that models can learn to output convincing explanations for the task at hand. However, we leave this as future work, and we report here the results one obtains when using the explanations from e-SNLI-VE-2.0.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations
This section presents two VTE models that generate natural language explanations for their own decisions. We name them PaE-BUTD-VE and EtP-BUTD-VE, where PaE (resp. EtP) is for PredictAndExplain (resp. ExplainThenPredict), two models with similar principles introduced by Camburu BIBREF6. The first system learns to generate an explanation conditioned on the image premise, textual hypothesis, and predicted label. In contrast, the second system learns to first generate an explanation conditioned on the image premise and textual hypothesis, and subsequently makes a prediction solely based on the explanation.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain
PaE-BUTD-VE is a system for solving VTE and generating natural language explanations for the predicted labels. The explanations are conditioned on the image premise, the text hypothesis, and the predicted label (ground-truth label at train time), as shown in Figure FIGREF24.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Model.
As described in Section SECREF12, in the BUTD model, the hypothesis vector and the image vector were fused in a fixed-size feature vector f. The vector f was then given as input to an MLP which outputs a probability distribution over the three labels. In PaE-BUTD-VE, in addition to the classification layer, we add a 512-LSTM BIBREF12 decoder to generate an explanation. The decoder takes the feature vector f as initial state. Following Camburu BIBREF6, we prepend the label as a token at the beginning of the explanation to condition the explanation on the label. The ground truth label is provided at training time, whereas the predicted label is given at test time.
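A possible sketch of this decoder is given below: the fused feature vector f initializes the 512-unit LSTM, and the label is prepended as the first token of the explanation sequence. The feature dimension, the embedding size, and the use of linear layers to map f to the initial LSTM state are assumptions, not details taken from the original implementation.

```python
import torch.nn as nn

class ExplanationDecoder(nn.Module):
    def __init__(self, vocab_size, feat_dim=512, emb_dim=300, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # label tokens live in the same vocabulary
        self.to_h = nn.Linear(feat_dim, hidden)          # map fused feature f to the initial state
        self.to_c = nn.Linear(feat_dim, hidden)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, f, tokens):
        # tokens: (batch, seq) with the (ground-truth or predicted) label prepended at position 0
        h0 = self.to_h(f).unsqueeze(0)                   # (1, batch, hidden)
        c0 = self.to_c(f).unsqueeze(0)
        states, _ = self.lstm(self.embed(tokens), (h0, c0))
        return self.out(states)                          # per-step logits over the explanation vocabulary
```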
At test time, we use beam search with a beam width of 3 to decode explanations. For memory and time reduction, we replaced words that appeared less than 15 times among explanations with “#UNK#”. This strategy reduces the output vocabulary size to approximately 8.6k words.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Loss.
The training loss is a weighted combination of the classification loss and the explanation loss, both computed using softmax cross-entropy: $\mathcal{L} = \alpha \, \mathcal{L}_{label} + (1-\alpha) \, \mathcal{L}_{explanation}$, with $\alpha \in [0,1]$.
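In code, the combined objective can be sketched as follows (the padding index used to mask explanation tokens is an assumption):

```python
import torch.nn.functional as F

def combined_loss(label_logits, labels, expl_logits, expl_tokens, alpha=0.4, pad_idx=0):
    """L = alpha * L_label + (1 - alpha) * L_explanation, both softmax cross-entropy."""
    l_label = F.cross_entropy(label_logits, labels)
    l_expl = F.cross_entropy(expl_logits.flatten(0, 1), expl_tokens.flatten(),
                             ignore_index=pad_idx)
    return alpha * l_label + (1 - alpha) * l_expl
```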
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Model selection.
In this experiment, we are first interested in examining whether a neural network can generate explanations at no cost in label accuracy. Therefore, only the balanced label accuracy is used as the model selection criterion. However, future work can investigate other selection criteria that combine label and explanation performance. We performed a hyperparameter search on $\alpha$, considering values between 0.2 and 0.8 with a step of 0.2. We found $\alpha =0.4$ to produce the best validation balanced accuracy of 72.81%, while BUTD trained without explanations yielded a similar 72.58% validation balanced accuracy.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Results.
As summarised in Table TABREF30, we obtain a test balanced accuracy of 73% for PaE-BUTD-VE, while the same model trained without explanations obtains 72.52%. This is encouraging, since it shows that one can obtain additional natural language explanations without sacrificing performance (and possibly even improve label performance; however, future work is needed to conclude whether the 0.48% improvement is statistically significant).
Camburu BIBREF6 mentioned that the BLEU score was not an appropriate measure for the quality of explanations and suggested human evaluation instead. We therefore manually scored the relevance of 100 explanations that were generated when the model predicted correct labels. We found that only 20% of the explanations were relevant. We highlight that the relevance of an explanation here refers to whether it reflects ground-truth reasons supporting the correct label; this is not to be confused with whether the explanation correctly illustrates the inner workings of the model, which is left as future work. It is also important to note that, in a similar experimental setting, Camburu BIBREF6 report as low as 34.68% correct explanations, even though they trained with explanations that were collected specifically for their task. Lastly, the model selection criterion at validation time was the prediction balanced accuracy, which may contribute to the low quality of explanations. While we show that adding an explanation module does not harm prediction performance, more work is necessary to get models that output trustable explanations.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict
When assigning a label, an explanation is naturally part of the decision-making process. This motivates the design of a system that explains itself before deciding on a label, called EtP-BUTD-VE. For this system, a first neural network is trained to generate an explanation given an image-sentence input. Separately, a second neural network, called ExplToLabel-VE, is trained to predict a label from an explanation (see Figure FIGREF32).
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Model.
For the first network, we set $\alpha =0$ in the training loss of the PaE-BUTD-VE model to obtain a system that only learns to generate an explanation from the image-sentence input, without label prediction. Hence, in this setting, no label is prepended before the explanation.
For the ExplToLabel-VE model, we use a 512-LSTM followed by an MLP with three 512-unit layers with ReLU activations and a softmax output that classifies the explanation as entailment, contradiction, or neutral.
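A sketch of ExplToLabel-VE under these specifications; the embedding size and the use of the last hidden state as sentence summary are assumptions.

```python
import torch.nn as nn

class ExplToLabel(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=512, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(hidden, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, n_classes),   # softmax is applied inside the cross-entropy loss
        )

    def forward(self, expl_tokens):
        _, (h_n, _) = self.lstm(self.embed(expl_tokens))
        return self.mlp(h_n[-1])         # last hidden state summarises the explanation
```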
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Model selection.
For ExplToLabel-VE, the best model is selected on balanced accuracy at validation time. For EtP-BUTD-VE, perplexity is used to select the best model parameters at validation time. It is computed between the explanations produced by the LSTM and ground truth explanations from the validation set.
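Per-word perplexity can be computed as the exponential of the average token-level negative log-likelihood of the ground-truth explanations under the decoder; a minimal sketch (padding index assumed):

```python
import math
import torch.nn.functional as F

def perplexity_per_word(expl_logits, expl_tokens, pad_idx=0):
    nll = F.cross_entropy(expl_logits.flatten(0, 1), expl_tokens.flatten(),
                          ignore_index=pad_idx, reduction="mean")
    return math.exp(nll.item())
```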
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Results.
When we train ExplToLabel-VE on e-SNLI-VE-2.0, we obtain a balanced accuracy of 90.55% on the test set.
As reported in Table TABREF30, the overall EtP-BUTD-VE system achieves 69.40% balanced accuracy on the test set of e-SNLI-VE-2.0, which is a 3% decrease from the non-explanatory BUTD counterpart (72.52%). However, by setting $\alpha $ to zero and selecting the model that gives the best perplexity per word at validation, the quality of the explanations increases significantly, to 35% relevance, based on manual evaluation. Thus, in our model, generating better explanations involves a small sacrifice in label prediction accuracy, implying a trade-off between explanation generation and accuracy.
We note that there is room for improvement in our explanation generation method. For example, one can implement an attention mechanism similar to Xu BIBREF13, so that each generated word relates to a relevant part of the multimodal feature representation.
Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Qualitative Analysis of Generated Explanations
We complement our quantitative results with a qualitative analysis of the explanations generated by our enhanced VTE systems. In Figures FIGREF36 and FIGREF37, we present examples of the predicted labels and generated explanations.
Figure FIGREF36 shows an example where the EtP-BUTD-VE model produces both a correct label and a relevant explanation. The label is contradiction, because in the image, the students are playing with a soccer ball and not a basketball, thus contradicting the text hypothesis. Given the composition of the generated sentence (“Students cannot be playing soccer and baseball at the same time.”), ExplToLabel-VE was able to detect a contradiction in the image-sentence input. In comparison, the explanation from e-SNLI-VE-2.0 is not correct, even if it was valid for e-SNLI when the text premise was given. This emphasizes the difficulty that we are facing with generating proper explanations when training on a noisy dataset.
Even when the generated explanations are irrelevant, we noticed that they are on-topic and that most of the time the mistakes come from repetitions of certain sub-phrases. For example, in Figure FIGREF37, PaE-BUTD-VE predicts the label neutral, which is correct, but the explanation contains an erroneous repetition of the n-gram “are in a car”. However, it appears that the system learns to generate a sentence in the form “Just because ...doesn't mean ...”, which is frequently found for the justification of neutral pairs in the training set. The explanation generated by EtP-BUTD-VE adopts the same structure, and the ExplToLabel-VE component correctly classifies the instance as neutral. However, even if the explanation is semantically correct, it is not relevant for the input and fails to explain the classification.
Conclusion
In this paper, we first presented SNLI-VE-2.0, which corrects the neutral instances in the validation and test sets of SNLI-VE. Secondly, we re-evaluated an existing model on the corrected sets in order to update the estimate of its performance on this task. Thirdly, we introduced e-SNLI-VE-2.0, a dataset which extends SNLI-VE-2.0 with natural language explanations. Finally, we trained two types of models that learn from these explanations at training time, and output such explanations at test time, as a stepping stone in explainable artificial intelligence. Our work is a jumping-off point both for the identification and correction of errors in SNLI-VE and for the extension to explainable VTE. We hope that the community will build on our findings to create more robust as well as explainable multimodal systems.
Conclusion ::: Acknowledgements.
This work was supported by the Oxford Internet Institute, a JP Morgan PhD Fellowship 2019-2020, an Oxford-DeepMind Graduate Scholarship, the Alan Turing Institute under the EPSRC grant EP/N510129/1, and the AXA Research Fund, as well as DFG-EXC-Nummer 2064/1-Projektnummer 390727645 and the ERC under the Horizon 2020 program (grant agreement No. 853489).
Appendix ::: Statistics of e-SNLI-VE-2.0
e-SNLI-VE-2.0 is the combination of SNLI-VE-2.0 with explanations from either e-SNLI or our crowdsourced annotations where applicable. The statistics of e-SNLI-VE-2.0, including text hypotheses and explanations, are shown in Table TABREF40.
Appendix ::: Details of the Mechanical Turk Task
We used Amazon Mechanical Turk (MTurk) to collect new labels and explanations for SNLI-VE. 2,060 workers participated in the annotation effort, with an average of 1.98 assignments per worker and a standard deviation of 5.54. We required the workers to have a previous approval rate above 90%. No restriction was put on the workers' location.
Each assignment consisted of a set of 10 image-sentence pairs. For each pair, the participant was asked to (a) choose a label, (b) highlight words in the sentence that led to their decision, and (c) explain their decision in a comprehensive and concise manner, using a subset of the words that they highlighted. The instructions are shown in Figure FIGREF42. Workers were also guided with three annotated examples, one for each label.
For each assignment of 10 questions, one trusted annotation with gold standard label was inserted at a random position, as a measure to control the quality of label annotation. Each assignment was completed by three different workers. An example of question is shown in Figure FIGREF8 in the core paper.
Appendix ::: Ambiguous Examples from SNLI-VE
Some examples in SNLI-VE were ambiguous, and correct justifications could be found for incompatible labels, as shown in Figures FIGREF44, FIGREF45, and FIGREF46. | neutral class
9447ec36e397853c04dcb8f67492ca9f944dbd4b | 9447ec36e397853c04dcb8f67492ca9f944dbd4b_0 | Q: What is the dataset used as input to the Word2Vec algorithm?
Text: Introduction
In order to make human language comprehensible to a computer, it is obviously essential to provide some word encoding. The simplest approach is one-hot encoding, where each word is represented by a sparse vector with dimension equal to the vocabulary size. Beyond the storage cost, the main problem of this representation is that any concept of word similarity is completely ignored (the vectors are all orthogonal and equidistant from each other). On the contrary, the understanding of natural language cannot be separated from the semantic knowledge of words, which implies different degrees of closeness between them. Indeed, the semantic representation of words is a basic problem of Natural Language Processing (NLP). Words therefore need to be encoded in a space linked to their meaning, in order to facilitate a machine in the task of “understanding" them. In particular, starting from the seminal work BIBREF0, words are usually represented as dense distributed vectors that preserve their uniqueness but, at the same time, are able to encode their similarities.
These word representations are called Word Embeddings, since the words (points in a space of vocabulary size) are mapped into an embedding space of lower dimension. Supported by the distributional hypothesis BIBREF1 BIBREF2, which states that a word can be semantically characterized by its context (i.e. the words that surround it in the sentence), many word embedding representations have been proposed in recent years (a fairly complete and updated review can be found in BIBREF3 and BIBREF4). These methods can be roughly categorized into two main classes: prediction-based models and count-based models. The former are generally linked to work on Neural Network Language Models (NNLM) and use a training algorithm that predicts a word given its local context; the latter leverage word-context statistics and co-occurrence counts over an entire corpus. The main prediction-based and count-based models are respectively Word2Vec BIBREF5 (W2V) and GloVe BIBREF6.
Despite the widespread use of these concepts BIBREF7 BIBREF8, few contributions exist regarding the development of a W2V for languages other than English. In particular, no detailed analysis of an Italian W2V seems to be present in the literature, except for BIBREF9 and BIBREF10. However, both seem to leave out some elements of fundamental interest for the learning of the neural network, in particular the number of epochs performed during learning, underestimating the importance it may have on the final result. In BIBREF9, for example, this leads to the simplistic conclusion that the more space is given to the vectors (which can then organize themselves more freely), the better the results may be. However, the problem in complex structures is that large embedding spaces can make training too difficult.
In this work, by fixing the size of the embedding to a commonly used value, we analyse how various parameters behave as the number of learning epochs changes, depending on the window size and on the number of negatively backpropagated samples.
Word2Vec
The W2V structure consists of a simple two-level neural network (Figure FIGREF1) with one-hot vectors representing words at the input. It can be trained in two different modes, algorithmically similar, but different in concept: Continuous Bag-of-Words (CBOW) model and Skip-Gram model. While CBOW tries to predict the target words from the context, Skip-Gram instead aims to determine the context for a given target word. The two different approaches therefore modify only the way in which the inputs and outputs are to be managed, but in any case, the network does not change, and the training always takes place between single pairs of words (placed as one-hot in input and output).
The text is in fact divided into sentences, and for each word of a given sentence a window of words is taken from the right and from the left to define the context. The central word is coupled with each of the words in its context, forming the set of pairs used for training. Depending on whether the central word represents the output or the input of the training pairs, the CBOW or the Skip-gram model is obtained, respectively.
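A minimal sketch of this pair generation (a fixed symmetric window is assumed here; the reference implementation actually samples a window size up to the maximum):

```python
def training_pairs(sentence, window=5):
    """Yield (centre, context) word pairs within a symmetric window.
    Feeding centre -> context gives Skip-gram pairs; context -> centre gives CBOW."""
    for i, centre in enumerate(sentence):
        lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                yield centre, sentence[j]

# e.g. list(training_pairs("il gatto dorme sul divano".split(), window=2))
```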
Regardless of whether W2V is trained to predict the context or the target word, it is used as a word embedding in a substantially different manner from the one for which it has been trained. In particular, the second matrix is totally discarded during use, since the only thing relevant to the representation is the space of the vectors generated in the intermediate level (embedding space).
Word2Vec ::: Sampling rate
The common words (such as “the", “of", etc.) carry very little information on the target word with which they are coupled, and through backpropagation they tend to have extremely small representative vectors in the embedding space. To solve both these problems the W2V algorithm implements a particular “subsampling" BIBREF11, which acts by eliminating some words from certain sentences. Note that the elimination of a word directly from the text means that it no longer appears in the context of any of the words of the sentence and, at the same time, a number of pairs equal to (at most) twice the size of the window relating to the deleted word will also disappear from the training set.
In practice, each word is associated with a sort of “keeping probability" and, whenever that word is encountered, the word is kept in the text only if this value is greater than a randomly generated value. The W2V implementation assigns this “probability" to the generic word $w_i$ through the formula:
where $f(w_i)$ is the relative frequency of the word $w_i$ (namely $count(w_i)/total$), while $s$ is a sample value, typically set between $10^{-3}$ and $10^{-5}$.
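Since the equation itself is not reproduced above, the sketch below assumes the keep-probability used by the reference word2vec/Gensim implementation, $P(w_i) = \big (\sqrt{f(w_i)/s} + 1\big ) \cdot s / f(w_i)$, which matches the description of $f(w_i)$ and $s$ given here.

```python
import math
import random

def subsample(sentence, relative_freq, s=1e-3):
    """Randomly discard frequent words; relative_freq maps a word to f(w) = count(w)/total."""
    kept = []
    for w in sentence:
        f = relative_freq[w]
        p_keep = min(1.0, (math.sqrt(f / s) + 1) * s / f)   # assumed word2vec-style keep probability
        if random.random() < p_keep:
            kept.append(w)
    return kept
```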
Word2Vec ::: Negative sampling
Working with one-hot pairs of words means that the size of the network must be the same at input and output, and must be equal to the size of the vocabulary. So, although very simple, the network has a considerable number of parameters to train, which leads to an excessive computational cost if all the elements of the output one-hot vector are backpropagated.
The “negative sampling" technique BIBREF11 tries to solve this problem by modifying only a small percentage of the network weights at each step. In practice, for each pair of words in the training set, the loss function is calculated only for the value 1 and for a few of the 0 values of the desired one-hot output vector. The computational cost is therefore reduced by choosing to backpropagate only $K$ “negative" words and one positive word, instead of the entire vocabulary. Typical values for negative sampling (the number of negative samples that will be backpropagated, to which the single positive value is always added) range from 2 to 20, depending on the size of the dataset.
The probability of selecting a negative word to backpropagate depends on its frequency, in particular through the formula:
Negative samples are then selected by choosing a sort of “unigram distribution", so that the most frequent words are also the most often backpropagated ones.
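The formula is not reproduced above; the sketch below assumes the smoothed unigram distribution commonly used for this purpose, $P(w_i) = f(w_i)^{3/4} / \sum _j f(w_j)^{3/4}$, which matches the description of frequent words being sampled more often.

```python
import random

def make_negative_sampler(counts, power=0.75):
    """Return a sampler drawing negative words from the smoothed unigram distribution."""
    words = list(counts)
    weights = [counts[w] ** power for w in words]
    def sample(k=5):
        return random.choices(words, weights=weights, k=k)   # k negative words per positive pair
    return sample
```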
Implementation details
The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\,829\,960$ words divided into $17\,305\,401$ sentences.
The text was preprocessed by removing the words whose absolute frequency was less than 5 and eliminating all special characters. Since it is impossible to represent every imaginable numerical value, but we did not want to eliminate the concept of “numerical representation" linked to certain words, it was also decided to replace every number present in the text with the special $\langle NUM \rangle $ token, which probably also obtains a better representation in the embedding space (instead of being split across the various possible values). All the words were then transformed to lowercase (to avoid duplicate entries), finally producing a vocabulary of $618\,224$ words.
Note that the special characters also include punctuation marks, which therefore do not appear in the vocabulary. However, some of them (`.', `?' and `!') are only removed later, as they are first used to separate the sentences.
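A sketch of this preprocessing pipeline (the regular expressions and the exact spelling of the number token are assumptions):

```python
import re
from collections import Counter

NUM_TOKEN = "<NUM>"

def preprocess(raw_text, min_count=5):
    """Split on '.', '?', '!', lowercase, replace numbers with <NUM>,
    drop special characters, and prune words with absolute frequency < min_count."""
    tokenized = []
    for sent in re.split(r"[.?!]", raw_text.lower()):
        tokens = []
        for tok in sent.split():
            if re.fullmatch(r"\d[\d.,]*", tok):
                tokens.append(NUM_TOKEN)
            else:
                tok = re.sub(r"[^a-zàèéìíòóùú]", "", tok)
                if tok:
                    tokens.append(tok)
        if tokens:
            tokenized.append(tokens)
    counts = Counter(w for sent in tokenized for w in sent)
    return [[w for w in sent if counts[w] >= min_count] for sent in tokenized]
```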
The Python implementation provided by Gensim was used for training the various embeddings, all with size 300 and sampling parameter ($s$ in Equation DISPLAY_FORM3) set to $0.001$.
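A sketch of the corresponding Gensim call for one of the configurations studied below (e.g. W10N20, Skip-gram); the size and epoch parameters are named `size`/`iter` in Gensim 3.x and `vector_size`/`epochs` in Gensim 4.x, and the corpus file name is a placeholder.

```python
from gensim.models import Word2Vec

sentences = preprocess(open("corpus_it.txt", encoding="utf-8").read())  # reusing the sketch above

model = Word2Vec(
    sentences=sentences,
    vector_size=300,   # embedding size 300 ("size" in Gensim 3.x)
    sample=0.001,      # sub-sampling parameter s
    sg=1,              # Skip-gram (sg=0 for CBOW)
    window=10,
    negative=20,       # e.g. the W10N20 configuration
    min_count=5,
    epochs=50,         # "iter" in Gensim 3.x
)
model.wv.save("w2v_it_w10n20.kv")
```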
Results
To analyse the results we chose to use the test provided by BIBREF10, which consists of $19\,791$ analogies divided into 19 different categories: 6 related to the “semantic" macro-area (8915 analogies) and 13 to the “syntactic" one (10876 analogies). All the analogies are composed of two pairs of words that share a relation, schematized with the equation: $a:a^{*}=b:b^{*}$ (e.g. “man : woman = king : queen"); where $b^{*}$ is the word to be guessed (“queen"), $b$ is the word coupled to it (“king"), $a$ is the word for the components to be eliminated (“man"), and $a^{*}$ is the word for the components to be added (“woman").
The determination of the correct response was obtained both through the classical additive cosine distance (3COSADD) BIBREF5:
and through the multiplicative cosine distance (3COSMUL) BIBREF12:
where $\epsilon =10^{-6}$ and $\cos (x, y) = \frac{x \cdot y}{\left\Vert x\right\Vert \left\Vert y\right\Vert }$. The extremely low value chosen for $\epsilon $ is due to the desire to minimize its impact on performance as much as possible, as during the various testing phases we noticed a strange bound that is still being investigated. As usual, moreover, the representative vectors of the embedding space are normalized before the execution of the various tests.
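Since the two formulas are not reproduced above, the sketch below follows the standard definitions from the cited works: 3COSADD maximizes $\cos (x,b) - \cos (x,a) + \cos (x,a^{*})$ and 3COSMUL maximizes $\frac{\cos (x,b) \cdot \cos (x,a^{*})}{\cos (x,a) + \epsilon }$ over the vocabulary (the optional shifting of cosines to a non-negative range is omitted here).

```python
import numpy as np

def solve_analogy(E, vocab, a, a_star, b, eps=1e-6, method="3COSMUL"):
    """E: matrix of L2-normalised word vectors (one row per word); vocab: word -> row index."""
    ia, ias, ib = vocab[a], vocab[a_star], vocab[b]
    cos_a, cos_as, cos_b = E @ E[ia], E @ E[ias], E @ E[ib]   # cosines with every vocabulary word
    if method == "3COSADD":
        scores = cos_b - cos_a + cos_as
    else:
        scores = (cos_b * cos_as) / (cos_a + eps)
    scores[[ia, ias, ib]] = -np.inf                           # exclude the three query words
    inv_vocab = {i: w for w, i in vocab.items()}
    return inv_vocab[int(np.argmax(scores))]
```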
Results ::: Analysis of the various models
We first analysed 6 different implementations of the Skip-gram model, each one trained for 20 epochs. Table TABREF10 shows the accuracy values (only on possible analogies) at the 20th epoch for the six models, using both 3COSADD and 3COSMUL. It is interesting to note that the 3COSADD total metric, compared to 3COSMUL, seems to give slightly better results in the two extreme cases of limited learning (W5N5 and W10N20) and under the semantic profile. However, we should keep in mind that the semantic profile is the one best captured by the network in both cases, which is probably due to the nature of the database (mainly composed of articles and news that principally use an impersonal language). In any case, the improvements obtained under the syntactic profile lead to the 3COSMUL metric achieving better overall results.
Figure FIGREF11 shows the trends of the total accuracy at different epochs for the various models using 3COSMUL (the trend obtained with 3COSADD is very similar). Here we can see how the use of high negative sampling can worsen performance, even causing the network to oscillate (W5N20) in order to better adapt to all the data. The choice of the negative sampling to be used should therefore be strongly linked to the choice of the window size as well as to the number of training epochs.
Continuing the training of the two worst models up to the 50th epoch, it is observed (Table TABREF12) that they are still able to reach the performances of the other models. The W10N20 model at the 50th epoch even proves to be better than all the other previous models, becoming the reference model for subsequent comparisons. As the various epochs change (Figure FIGREF13.a) it appears to have the same oscillatory pattern observed previously, albeit with only one oscillation given the greater window size. This model is available at: https://mlunicampania.gitlab.io/italian-word2vec/.
Various tests were also conducted on CBOW models, which however proved to be in general significantly lower than Skip-gram models. Figure FIGREF13.b shows, for example, the accuracy trend for a CBOW model with a window equal to 10 and negative sampling equal to 20, which on 50 epochs reaches only $37.20\%$ of total accuracy (with 3COSMUL metric).
Results ::: Comparison with other models
Finally, a comparison was made between the Skip-gram model W10N20 obtained at the 50th epoch and the other two W2V in Italian present in the literature (BIBREF9 and BIBREF10). The first test (Table TABREF15) was performed considering all the analogies present, and therefore evaluating as an error any analogy that was not executable (as it related to one or more words absent from the vocabulary).
As can be seen, regardless of the metric used, our model achieves significantly better results than the other two models, both overall and within the two macro-areas. Furthermore, the other two models seem to be more sensitive to the metric used, perhaps due to a stabilization not yet reached given the few training epochs.
For a complete comparison, both models were also tested considering only the subset of the analogies in common with our model (i.e. eliminating from the test all those analogies that were not executable by one or the other model). Tables TABREF16 and TABREF17 again highlight the marked increase in performance of our model compared to both.
Conclusion
In this work we have analysed the Word2Vec model for the Italian language, obtaining a substantial increase in performance with respect to the other two models in the literature (and despite the fixed size of the embedding). These results, in addition to the number of learning epochs, are probably also due to the different data pre-processing phase, executed very carefully in performing a complete cleaning of the text and, above all, in substituting the numerical values with a single special token. We have observed that the number of epochs is an important parameter and that increasing it leads to results that rank our two worst models almost equal to, or even better than, the others.
Changing the number of epochs, in some configurations, creates an oscillatory trend, which seems to be linked to a particular interaction between the window size and the negative sampling value. In the future, thanks to the collaboration in the Laila project, we intend to expand the dataset by adding more user chats. The objective will be to verify whether the use of a less formal language can improve accuracy in the syntactic macro-area. | Italian Wikipedia and Google News extraction producing final vocabulary of 618224 words
12c6ca435f4fcd4ad5ea5c0d76d6ebb9d0be9177 | 12c6ca435f4fcd4ad5ea5c0d76d6ebb9d0be9177_0 | Q: Are the word embeddings tested on a NLP task?
| Yes
32c149574edf07b1a96d7f6bc49b95081de1abd2 | 32c149574edf07b1a96d7f6bc49b95081de1abd2_0 | Q: Are the word embeddings evaluated?
Text: Introduction
In order to make human language comprehensible to a computer, it is obviously essential to provide some word encoding. The simplest approach is the one-hot encoding, where each word is represented by a sparse vector with dimension equal to the vocabulary size. In addition to the storage need, the main problem of this representation is that any concept of word similarity is completely ignored (each vector is orthogonal and equidistant from each other). On the contrary, the understanding of natural language cannot be separated from the semantic knowledge of words, which conditions a different closeness between them. Indeed, the semantic representation of words is the basic problem of Natural Language Processing (NLP). Therefore, there is a necessary need to code words in a space that is linked to their meaning, in order to facilitate a machine in potential task of “understanding" it. In particular, starting from the seminal work BIBREF0, words are usually represented as dense distributed vectors that preserve their uniqueness but, at the same time, are able to encode the similarities.
These word representations are called Word Embeddings since the words (points in a space of vocabulary size) are mapped in an embedding space of lower dimension. Supported by the distributional hypothesis BIBREF1 BIBREF2, which states that a word can be semantically characterized based on its context (i.e. the words that surround it in the sentence), in recent years many word embedding representations have been proposed (a fairly complete and updated review can be found in BIBREF3 and BIBREF4). These methods can be roughly categorized into two main classes: prediction-based models and count-based models. The former is generally linked to work on Neural Network Language Models (NNLM) and use a training algorithm that predicts the word given its local context, the latter leverage word-context statistics and co-occurrence counts in an entire corpus. The main prediction-based and count-based models are respectively Word2Vec BIBREF5 (W2V) and GloVe BIBREF6.
Despite the widespread use of these concepts BIBREF7 BIBREF8, few contributions exist regarding the development of a W2V that is not in English. In particular, no detailed analysis on an Italian W2V seems to be present in the literature, except for BIBREF9 and BIBREF10. However, both seem to leave out some elements of fundamental interest in the learning of the neural network, in particular relating to the number of epochs performed during learning, reducing the importance that it may have on the final result. In BIBREF9, this for example leads to the simplistic conclusion that (being able to organize with more freedom in space) the more space is given to the vectors, the better the results may be. However, the problem in complex structures is that large embedding spaces can make training too difficult.
In this work, by setting the size of the embedding to a commonly used average value, various parameters are analysed as the number of learning epochs changes, depending on the window sizes and the negatively backpropagated samples.
Word2Vec
The W2V structure consists of a simple two-level neural network (Figure FIGREF1) with one-hot vectors representing words at the input. It can be trained in two different modes, algorithmically similar, but different in concept: Continuous Bag-of-Words (CBOW) model and Skip-Gram model. While CBOW tries to predict the target words from the context, Skip-Gram instead aims to determine the context for a given target word. The two different approaches therefore modify only the way in which the inputs and outputs are to be managed, but in any case, the network does not change, and the training always takes place between single pairs of words (placed as one-hot in input and output).
The text is in fact divided into sentences, and for each word of a given sentence a window of words is taken from the right and from the left to define the context. The central word is coupled with each of the words forming the set of pairs for training. Depending on the fact that the central word represents the output or the input in training pairs, the CBOW and Skip-gram models are obtained respectively.
Regardless of whether W2V is trained to predict the context or the target word, it is used as a word embedding in a substantially different manner from the one for which it has been trained. In particular, the second matrix is totally discarded during use, since the only thing relevant to the representation is the space of the vectors generated in the intermediate level (embedding space).
Word2Vec ::: Sampling rate
The common words (such as “the", “of", etc.) carry very little information on the target word with which they are coupled, and through backpropagation they tend to have extremely small representative vectors in the embedding space. To solve both these problems the W2V algorithm implements a particular “subsampling" BIBREF11, which acts by eliminating some words from certain sentences. Note that the elimination of a word directly from the text means that it no longer appears in the context of any of the words of the sentence and, at the same time, a number of pairs equal to (at most) twice the size of the window relating to the deleted word will also disappear from the training set.
In practice, each word is associated with a sort of “keeping probability" and, when you meet that word, if this value is greater than a randomly generated value then the word will not be discarded from the text. The W2V implementation assigns this “probability" to the generic word $w_i$ through the formula:
where $f(w_i)$ is the relative frequency of the word $w_i$ (namely $count(w_i)/total$), while $s$ is a sample value, typically set between $10^{-3}$ and $10^{-5}$.
Word2Vec ::: Negative sampling
Working with one-hot pairs of words means that the size of the network must be the same at input and output, and must be equal to the size of the vocabulary. So, although very simple, the network has a considerable number of parameters to train, which lead to an excessive computational cost if we are supposed to backpropagate all the elements of the one-hot vector in output.
The “negative sampling" technique BIBREF11 tries to solve this problem by modifying only a small percentage of the net weights every time. In practice, for each pair of words in the training set, the loss function is calculated only for the value 1 and for a few values 0 of the one-hot vector of the desired output. The computational cost is therefore reduced by choosing to backpropagate only $K$ words “negative" and one positive, instead of the entire vocabulary. Typical values for negative sampling (the number of negative samples that will be backpropagated and to which therefore the only positive value will always be added), range from 2 to 20, depending on the size of the dataset.
The probability of selecting a negative word to backpropagate depends on its frequency, in particular through the formula:
Negative samples are then selected by choosing a sort of “unigram distribution", so that the most frequent words are also the most often backpropated ones.
Implementation details
The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\,829\,960$ words divided into $17\,305\,401$ sentences.
The text was previously preprocessed by removing the words whose absolute frequency was less than 5 and eliminating all special characters. Since it is impossible to represent every imaginable numerical value, but not wanting to eliminate the concept of “numerical representation" linked to certain words, it was also decided to replace every number present in the text with the particular $\langle NUM \rangle $ token; which probably also assumes a better representation in the embedding space (not separating into the various possible values). All the words were then transformed to lowercase (to avoid a double presence) finally producing a vocabulary of $618\,224$ words.
Note that among the special characters are also included punctuation marks, which therefore do not appear within the vocabulary. However, some of them (`.', `?' and `!') are later removed, as they are used to separate the sentences.
The Python implementation provided by Gensim was used for training the various embeddings all with size 300 and sampling parameter ($s$ in Equation DISPLAY_FORM3) set at $0.001$.
Results
To analyse the results we chose to use the test provided by BIBREF10, which consists of $19\,791$ analogies divided into 19 different categories: 6 related to the “semantic" macro-area (8915 analogies) and 13 to the “syntactic" one (10876 analogies). All the analogies are composed by two pairs of words that share a relation, schematized with the equation: $a:a^{*}=b:b^{*}$ (e.g. “man : woman = king : queen"); where $b^{*}$ is the word to be guessed (“queen"), $b$ is the word coupled to it (“king"), $a$ is the word for the components to be eliminated (“man"), and $a^{*}$ is the word for the components to be added (“woman").
The determination of the correct response was obtained both through the classical additive cosine distance (3COSADD) BIBREF5:
and through the multiplicative cosine distance (3COSMUL) BIBREF12:
where $\epsilon =10^{-6}$ and $\cos (x, y) = \frac{x \cdot y}{\left\Vert x\right\Vert \left\Vert y\right\Vert }$. The extremely low value chosen for the $\epsilon $ is due to the desire to minimize as much as possible its impact on performance, as during the various testing phases we noticed a strange bound that is still being investigated. As usual, moreover, the representative vectors of the embedding space are previously normalized for the execution of the various tests.
Results ::: Analysis of the various models
We first analysed 6 different implementations of the Skip-gram model each one trained for 20 epochs. Table TABREF10 shows the accuracy values (only on possible analogies) at the 20th epoch for the six models both using 3COSADD and 3COSMUL. It is interesting to note that the 3COSADD total metric, respect to 3COSMUL, seems to have slightly better results in the two extreme cases of limited learning (W5N5 and W10N20) and under the semantic profile. However, we should keep in mind that the semantic profile is the one best captured by the network in both cases, which is probably due to the nature of the database (mainly composed of articles and news that principally use an impersonal language). In any case, the improvements that are obtained under the syntactic profile lead to the 3COSMUL metric obtaining better overall results.
Figure FIGREF11 shows the trends of the total accuracy at different epochs for the various models using 3COSMUL (the trend obtained with 3COSADD is very similar). Here we can see how the use of high negative sampling can worsen performance, even causing the network to oscillate (W5N20) in order to better adapt to all the data. The choice of the negative sampling to be used should therefore be strongly linked to the choice of the window size as well as to the number of training epochs.
Continuing the training of the two worst models up to the 50th epoch, it is observed (Table TABREF12) that they are still able to reach the performances of the other models. The W10N20 model at the 50th epoch even proves to be better than all the other previous models, becoming the reference model for subsequent comparisons. As the various epochs change (Figure FIGREF13.a) it appears to have the same oscillatory pattern observed previously, albeit with only one oscillation given the greater window size. This model is available at: https://mlunicampania.gitlab.io/italian-word2vec/.
Various tests were also conducted on CBOW models, which however proved to be in general significantly lower than Skip-gram models. Figure FIGREF13.b shows, for example, the accuracy trend for a CBOW model with a window equal to 10 and negative sampling equal to 20, which on 50 epochs reaches only $37.20\%$ of total accuracy (with 3COSMUL metric).
Results ::: Comparison with other models
Finally, a comparison was made between the Skip-gram model W10N20 at the 50th epoch and the other two Italian W2V models in the literature (BIBREF9 and BIBREF10). The first test (Table TABREF15) was performed considering all the analogies, therefore counting as an error any analogy that was not executable (because it involved one or more words absent from the vocabulary).
As can be seen, regardless of the metric used, our model achieves significantly better results than the other two, both overall and within the two macro-areas. Furthermore, the other two models seem to be more sensitive to the metric used, perhaps because they have not yet stabilized after so few training epochs.
For a complete comparison, both models were also tested considering only the subset of the analogies in common with our model (i.e. eliminating from the test all those analogies that were not executable by one or the other model). Tables TABREF16 and TABREF17 again highlight the marked increase in performance of our model compared to both.
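The restriction to the shared subset amounts to a simple vocabulary filter; a sketch, assuming each analogy is stored as a 4-tuple of words and each model exposes its vocabulary as a set (names are illustrative):

def executable(analogy, vocab):
    # an analogy a : a* = b : b* can be evaluated only if all four words are in the vocabulary
    return all(word in vocab for word in analogy)

def common_subset(analogies, vocab_a, vocab_b):
    # keep only the analogies that both models under comparison are able to answer
    return [q for q in analogies if executable(q, vocab_a) and executable(q, vocab_b)]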
Conclusion
In this work we have analysed a Word2Vec model for the Italian language, obtaining a substantial increase in performance with respect to the other two models in the literature (despite the fixed size of the embedding). Besides the number of learning epochs, these results are probably also due to the data pre-processing phase, carried out very carefully with a complete cleaning of the text and, above all, with the substitution of numerical values by a single dedicated token. We have observed that the number of epochs is an important parameter and that increasing it brings our two worst models almost level with, or even ahead of, the others.
Changing the number of epochs, in some configurations, creates an oscillatory trend, which seems to be linked to a particular interaction between the window size and the negative-sampling value. In the future, thanks to the collaboration within the Laila project, we intend to expand the dataset by adding more user chats. The objective will be to verify whether the use of a less formal language can improve accuracy in the syntactic macro-area. | Yes
3de27c81af3030eb2d9de1df5ec9bfacdef281a9 | 3de27c81af3030eb2d9de1df5ec9bfacdef281a9_0 | Q: How big is the dataset used to train Word2Vec for the Italian language?
Text: Introduction
In order to make human language comprehensible to a computer, it is obviously essential to provide some word encoding. The simplest approach is one-hot encoding, where each word is represented by a sparse vector with dimension equal to the vocabulary size. Besides the storage cost, the main problem of this representation is that any notion of word similarity is completely ignored (every vector is orthogonal to and equidistant from every other). On the contrary, the understanding of natural language cannot be separated from the semantic knowledge of words, which induces different degrees of closeness between them. Indeed, the semantic representation of words is the basic problem of Natural Language Processing (NLP). Words therefore need to be encoded in a space that is linked to their meaning, in order to make the task of "understanding" them easier for a machine. In particular, starting from the seminal work BIBREF0, words are usually represented as dense distributed vectors that preserve their uniqueness but, at the same time, are able to encode their similarities.
These word representations are called Word Embeddings, since the words (points in a space of vocabulary size) are mapped into an embedding space of lower dimension. Supported by the distributional hypothesis BIBREF1 BIBREF2, which states that a word can be semantically characterized by its context (i.e. the words that surround it in the sentence), many word embedding representations have been proposed in recent years (a fairly complete and updated review can be found in BIBREF3 and BIBREF4). These methods can be roughly categorized into two main classes: prediction-based models and count-based models. The former are generally linked to work on Neural Network Language Models (NNLM) and use a training algorithm that predicts a word given its local context; the latter leverage word-context statistics and co-occurrence counts over an entire corpus. The main prediction-based and count-based models are, respectively, Word2Vec BIBREF5 (W2V) and GloVe BIBREF6.
Despite the widespread use of these concepts BIBREF7 BIBREF8, few contributions exist regarding the development of a W2V that is not in English. In particular, no detailed analysis of an Italian W2V seems to be present in the literature, except for BIBREF9 and BIBREF10. However, both seem to leave out some elements of fundamental interest for the training of the neural network, in particular the number of epochs performed during learning, downplaying the importance it may have on the final result. In BIBREF9, for example, this leads to the simplistic conclusion that the more space is given to the vectors (letting them organize more freely), the better the results may be. The problem with complex structures, however, is that large embedding spaces can make training too difficult.
In this work, with the embedding size fixed at a commonly used average value, we analyse how the results vary with the number of learning epochs for different window sizes and numbers of negatively backpropagated samples.
Word2Vec
The W2V structure consists of a simple two-level neural network (Figure FIGREF1) with one-hot vectors representing the words at the input. It can be trained in two different modes, algorithmically similar but different in concept: the Continuous Bag-of-Words (CBOW) model and the Skip-gram model. While CBOW tries to predict the target word from its context, Skip-gram aims instead to determine the context for a given target word. The two approaches therefore differ only in how inputs and outputs are arranged; the network itself does not change, and training always takes place on single pairs of words (given as one-hot vectors at the input and at the output).
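As an illustration of this two-level structure, a hypothetical NumPy sketch (not the actual implementation) with toy sizes:

import numpy as np

V, d = 10_000, 300                      # toy vocabulary size; this work uses V = 618,224 and d = 300
rng = np.random.default_rng(0)
W_in = rng.normal(0.0, 0.01, (V, d))    # input -> hidden weights: row i is the embedding of word i
W_out = rng.normal(0.0, 0.01, (d, V))   # hidden -> output weights: discarded once training is over

def forward(word_idx):
    h = W_in[word_idx]                  # multiplying a one-hot vector by W_in is just a row lookup
    return h @ W_out                    # unnormalized scores over the whole vocabulary (before softmax)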
The text is divided into sentences, and for each word of a given sentence a window of words to its right and to its left defines the context. The central word is paired with each of these context words, and these pairs form the training set. Depending on whether the central word is used as the output or as the input of a pair, the CBOW or the Skip-gram model is obtained, respectively.
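For instance, the training pairs for one sentence could be generated as follows; this sketch uses a fixed window, whereas real implementations usually also sample a reduced effective window per position:

def training_pairs(sentence, window, skipgram=True):
    # sentence: list of tokens; returns (input, output) pairs of words
    pairs = []
    for i, centre in enumerate(sentence):
        lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                # Skip-gram: centre predicts context; CBOW (seen pair by pair): context predicts centre
                pairs.append((centre, sentence[j]) if skipgram else (sentence[j], centre))
    return pairs

# e.g. training_pairs(["il", "gatto", "dorme", "sul", "divano"], window=2)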
Regardless of whether W2V is trained to predict the context or the target word, it is used as a word embedding in a way that differs substantially from how it was trained. In particular, the second weight matrix is discarded entirely at usage time, since the only thing relevant to the representation is the space of vectors produced at the intermediate level (the embedding space).
Word2Vec ::: Sampling rate
Common words (such as "the", "of", etc.) carry very little information about the target words they are coupled with, and through backpropagation they tend to end up with extremely small representative vectors in the embedding space. To address both problems, the W2V algorithm implements a particular "subsampling" BIBREF11, which removes some words from certain sentences. Note that removing a word directly from the text means that it no longer appears in the context of any other word of the sentence; at the same time, a number of pairs equal to (at most) twice the window size centred on the deleted word also disappears from the training set.
In practice, each word is associated with a sort of "keeping probability": when the word is encountered, it is kept only if this value is greater than a randomly generated one, otherwise it is discarded from the text. The W2V implementation assigns this "probability" to a generic word $w_i$ through the formula:

$P(w_i) = \left( \sqrt{\frac{f(w_i)}{s}} + 1 \right) \cdot \frac{s}{f(w_i)}$
where $f(w_i)$ is the relative frequency of the word $w_i$ (namely $count(w_i)/total$), while $s$ is a sample value, typically set between $10^{-3}$ and $10^{-5}$.
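As a minimal sketch of this step (not the authors' code), the subsampling can be implemented as follows, assuming the keep-probability formulation above (the one used by the reference word2vec implementation) and taking tokenized sentences and the parameter $s$ as input:

import math
import random
from collections import Counter

def subsample(sentences, s=1e-3, seed=0):
    random.seed(seed)
    counts = Counter(w for sent in sentences for w in sent)
    total = sum(counts.values())
    # keeping probability for each word, based on its relative frequency f(w) = count(w)/total
    keep = {w: (math.sqrt((c / total) / s) + 1) * (s / (c / total)) for w, c in counts.items()}
    # a word survives only if its keeping probability beats a random draw in [0, 1)
    return [[w for w in sent if random.random() < keep[w]] for sent in sentences]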
Word2Vec ::: Negative sampling
Working with one-hot pairs of words means that the size of the network must be the same at input and output, and must be equal to the size of the vocabulary. So, although very simple, the network has a considerable number of parameters to train, which leads to an excessive computational cost if all the elements of the output one-hot vector are backpropagated.
The "negative sampling" technique BIBREF11 tries to solve this problem by updating only a small percentage of the network weights at each step. In practice, for each pair of words in the training set, the loss function is computed only for the single 1 value and for a few 0 values of the desired one-hot output. The computational cost is therefore reduced by backpropagating only $K$ "negative" words and one positive word, instead of the entire vocabulary. Typical values for negative sampling (the number of negative samples to be backpropagated, to which the single positive value is always added) range from 2 to 20, depending on the size of the dataset.
The probability of selecting a negative word to backpropagate depends on its frequency, in particular through the formula:

$P(w_i) = \frac{f(w_i)^{3/4}}{\sum _{j} f(w_j)^{3/4}}$
Negative samples are then selected according to this sort of smoothed "unigram distribution", so that the most frequent words are also the most often backpropagated ones.
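A sketch of how the noise distribution and the $K$ negative draws can be obtained, assuming the usual unigram distribution raised to the 3/4 power (function and variable names are illustrative):

import numpy as np

def noise_distribution(counts, power=0.75):
    # counts: array of absolute word frequencies; the 3/4 power slightly flattens the unigram distribution
    p = np.asarray(counts, dtype=float) ** power
    return p / p.sum()

def sample_negatives(p, k, positive_idx, rng):
    # draw k negative word indices; accidental draws of the positive word are simply re-drawn
    neg = rng.choice(len(p), size=k, p=p)
    while positive_idx in neg:
        neg = rng.choice(len(p), size=k, p=p)
    return neg

# e.g. sample_negatives(noise_distribution(counts), k=20, positive_idx=42, rng=np.random.default_rng(0))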
Implementation details
The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\,829\,960$ words divided into $17\,305\,401$ sentences.
The text was first preprocessed by removing the words whose absolute frequency was less than 5 and by eliminating all special characters. Since it is impossible to represent every imaginable numerical value, but without wanting to lose the notion of "numerical representation" attached to certain words, every number in the text was also replaced with a special $\langle NUM \rangle $ token, which probably also obtains a better representation in the embedding space (instead of being split across the many possible values). All words were then lowercased (to avoid duplicate entries), finally producing a vocabulary of $618\,224$ words.
Note that punctuation marks are also counted among the special characters and therefore do not appear in the vocabulary. Some of them (`.', `?' and `!'), however, are removed only at a later stage, since they are first used to split the text into sentences.
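A simplified version of this cleaning pipeline might look as follows; this is hypothetical code (the exact scripts and regular expressions used by the authors are not reported), and the lowercase <num> stands in for the $\langle NUM \rangle $ token:

import re
from collections import Counter

def preprocess(raw_text, min_count=5):
    text = raw_text.lower()                                   # lowercase to avoid duplicate entries
    text = re.sub(r"\d+(?:[.,]\d+)*", " <num> ", text)        # every number becomes a single special token
    sentences = re.split(r"[.?!]+", text)                     # '.', '?' and '!' split the text into sentences
    sentences = [re.sub(r"[^a-zàèéìíòóù<>\s]", " ", s).split() for s in sentences]
    counts = Counter(w for s in sentences for w in s)
    # drop words whose absolute frequency is below the threshold
    return [[w for w in s if counts[w] >= min_count] for s in sentences if s]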
The Python implementation provided by Gensim was used to train the various embeddings, all with size 300 and with the sampling parameter ($s$ in Equation DISPLAY_FORM3) set to $0.001$.
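For reference, a comparable training run can be set up with Gensim roughly as follows. Parameter names follow the Gensim 4 API (vector_size and epochs were called size and iter in older releases); the exact call used for these experiments is not reported, so the values below simply mirror the W10N20 configuration described in the text:

from gensim.models import Word2Vec

# sentences: list of token lists, e.g. the output of the preprocessing step above
model = Word2Vec(
    sentences,
    vector_size=300,   # embedding size fixed in this work
    sg=1,              # 1 = Skip-gram, 0 = CBOW
    window=10,         # e.g. the W10N20 configuration
    negative=20,       # number of negative samples
    sample=1e-3,       # subsampling parameter s
    min_count=5,
    epochs=50,
)
model.save("word2vec_it_w10n20.model")   # illustrative file name
vectors = model.wv                       # only the embedding matrix is kept for downstream use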
| $421\,829\,960$ words divided into $17\,305\,401$ sentences
cc680cb8f45aeece10823a3f8778cf215ccc8af0 | cc680cb8f45aeece10823a3f8778cf215ccc8af0_0 | Q: How do different parameter settings impact the performance and semantic capacity of the resulting model?
| number of epochs is an important parameter and its increase leads to results that rank our two worst models almost equal, or even better than others
fab4ec639a0ea1e07c547cdef1837c774ee1adb8 | fab4ec639a0ea1e07c547cdef1837c774ee1adb8_0 | Q: Are the semantic analysis findings for the Italian language similar to those for the English version?
| Unanswerable
9190c56006ba84bf41246a32a3981d38adaf422c | 9190c56006ba84bf41246a32a3981d38adaf422c_0 | Q: What dataset is used for training Word2Vec in the Italian language?
Text: Introduction
In order to make human language comprehensible to a computer, it is obviously essential to provide some word encoding. The simplest approach is the one-hot encoding, where each word is represented by a sparse vector with dimension equal to the vocabulary size. In addition to the storage need, the main problem of this representation is that any concept of word similarity is completely ignored (each vector is orthogonal and equidistant from each other). On the contrary, the understanding of natural language cannot be separated from the semantic knowledge of words, which conditions a different closeness between them. Indeed, the semantic representation of words is the basic problem of Natural Language Processing (NLP). Therefore, there is a necessary need to code words in a space that is linked to their meaning, in order to facilitate a machine in potential task of “understanding" it. In particular, starting from the seminal work BIBREF0, words are usually represented as dense distributed vectors that preserve their uniqueness but, at the same time, are able to encode the similarities.
These word representations are called Word Embeddings since the words (points in a space of vocabulary size) are mapped in an embedding space of lower dimension. Supported by the distributional hypothesis BIBREF1 BIBREF2, which states that a word can be semantically characterized based on its context (i.e. the words that surround it in the sentence), in recent years many word embedding representations have been proposed (a fairly complete and updated review can be found in BIBREF3 and BIBREF4). These methods can be roughly categorized into two main classes: prediction-based models and count-based models. The former is generally linked to work on Neural Network Language Models (NNLM) and use a training algorithm that predicts the word given its local context, the latter leverage word-context statistics and co-occurrence counts in an entire corpus. The main prediction-based and count-based models are respectively Word2Vec BIBREF5 (W2V) and GloVe BIBREF6.
Despite the widespread use of these concepts BIBREF7 BIBREF8, few contributions exist regarding the development of a W2V that is not in English. In particular, no detailed analysis on an Italian W2V seems to be present in the literature, except for BIBREF9 and BIBREF10. However, both seem to leave out some elements of fundamental interest in the learning of the neural network, in particular relating to the number of epochs performed during learning, reducing the importance that it may have on the final result. In BIBREF9, this for example leads to the simplistic conclusion that (being able to organize with more freedom in space) the more space is given to the vectors, the better the results may be. However, the problem in complex structures is that large embedding spaces can make training too difficult.
In this work, by setting the size of the embedding to a commonly used average value, various parameters are analysed as the number of learning epochs changes, depending on the window sizes and the negatively backpropagated samples.
Word2Vec
The W2V structure consists of a simple two-level neural network (Figure FIGREF1) with one-hot vectors representing words at the input. It can be trained in two different modes, algorithmically similar, but different in concept: Continuous Bag-of-Words (CBOW) model and Skip-Gram model. While CBOW tries to predict the target words from the context, Skip-Gram instead aims to determine the context for a given target word. The two different approaches therefore modify only the way in which the inputs and outputs are to be managed, but in any case, the network does not change, and the training always takes place between single pairs of words (placed as one-hot in input and output).
The text is in fact divided into sentences, and for each word of a given sentence a window of words is taken from the right and from the left to define the context. The central word is coupled with each of the words forming the set of pairs for training. Depending on the fact that the central word represents the output or the input in training pairs, the CBOW and Skip-gram models are obtained respectively.
Regardless of whether W2V is trained to predict the context or the target word, it is used as a word embedding in a substantially different manner from the one for which it has been trained. In particular, the second matrix is totally discarded during use, since the only thing relevant to the representation is the space of the vectors generated in the intermediate level (embedding space).
Word2Vec ::: Sampling rate
The common words (such as “the", “of", etc.) carry very little information on the target word with which they are coupled, and through backpropagation they tend to have extremely small representative vectors in the embedding space. To solve both these problems the W2V algorithm implements a particular “subsampling" BIBREF11, which acts by eliminating some words from certain sentences. Note that the elimination of a word directly from the text means that it no longer appears in the context of any of the words of the sentence and, at the same time, a number of pairs equal to (at most) twice the size of the window relating to the deleted word will also disappear from the training set.
In practice, each word is associated with a sort of “keeping probability" and, when you meet that word, if this value is greater than a randomly generated value then the word will not be discarded from the text. The W2V implementation assigns this “probability" to the generic word $w_i$ through the formula:
where $f(w_i)$ is the relative frequency of the word $w_i$ (namely $count(w_i)/total$), while $s$ is a sample value, typically set between $10^{-3}$ and $10^{-5}$.
Word2Vec ::: Negative sampling
Working with one-hot pairs of words means that the size of the network must be the same at input and output, and must be equal to the size of the vocabulary. So, although very simple, the network has a considerable number of parameters to train, which lead to an excessive computational cost if we are supposed to backpropagate all the elements of the one-hot vector in output.
The “negative sampling" technique BIBREF11 tries to solve this problem by modifying only a small percentage of the net weights every time. In practice, for each pair of words in the training set, the loss function is calculated only for the value 1 and for a few values 0 of the one-hot vector of the desired output. The computational cost is therefore reduced by choosing to backpropagate only $K$ words “negative" and one positive, instead of the entire vocabulary. Typical values for negative sampling (the number of negative samples that will be backpropagated and to which therefore the only positive value will always be added), range from 2 to 20, depending on the size of the dataset.
The probability of selecting a negative word to backpropagate depends on its frequency, in particular through the formula:
Negative samples are then selected by choosing a sort of “unigram distribution", so that the most frequent words are also the most often backpropated ones.
Implementation details
The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\,829\,960$ words divided into $17\,305\,401$ sentences.
The text was previously preprocessed by removing the words whose absolute frequency was less than 5 and eliminating all special characters. Since it is impossible to represent every imaginable numerical value, but not wanting to eliminate the concept of “numerical representation" linked to certain words, it was also decided to replace every number present in the text with the particular $\langle NUM \rangle $ token; which probably also assumes a better representation in the embedding space (not separating into the various possible values). All the words were then transformed to lowercase (to avoid a double presence) finally producing a vocabulary of $618\,224$ words.
Note that among the special characters are also included punctuation marks, which therefore do not appear within the vocabulary. However, some of them (`.', `?' and `!') are later removed, as they are used to separate the sentences.
The Python implementation provided by Gensim was used for training the various embeddings all with size 300 and sampling parameter ($s$ in Equation DISPLAY_FORM3) set at $0.001$.
Results
To analyse the results we chose to use the test provided by BIBREF10, which consists of $19\,791$ analogies divided into 19 different categories: 6 related to the “semantic" macro-area (8915 analogies) and 13 to the “syntactic" one (10876 analogies). All the analogies are composed by two pairs of words that share a relation, schematized with the equation: $a:a^{*}=b:b^{*}$ (e.g. “man : woman = king : queen"); where $b^{*}$ is the word to be guessed (“queen"), $b$ is the word coupled to it (“king"), $a$ is the word for the components to be eliminated (“man"), and $a^{*}$ is the word for the components to be added (“woman").
The determination of the correct response was obtained both through the classical additive cosine distance (3COSADD) BIBREF5:
and through the multiplicative cosine distance (3COSMUL) BIBREF12:
where $\epsilon =10^{-6}$ and $\cos (x, y) = \frac{x \cdot y}{\left\Vert x\right\Vert \left\Vert y\right\Vert }$. The extremely low value chosen for the $\epsilon $ is due to the desire to minimize as much as possible its impact on performance, as during the various testing phases we noticed a strange bound that is still being investigated. As usual, moreover, the representative vectors of the embedding space are previously normalized for the execution of the various tests.
Results ::: Analysis of the various models
We first analysed 6 different implementations of the Skip-gram model, each trained for 20 epochs. Table TABREF10 shows the accuracy values (computed only on the executable analogies) at the 20th epoch for the six models, using both 3COSADD and 3COSMUL. It is interesting to note that the 3COSADD total metric, compared to 3COSMUL, seems to give slightly better results in the two extreme cases of limited learning (W5N5 and W10N20) and under the semantic profile. However, we should keep in mind that the semantic profile is the one best captured by the network in both cases, which is probably due to the nature of the corpus (mainly composed of articles and news written in an impersonal register). In any case, the improvements obtained under the syntactic profile lead 3COSMUL to better overall results.
Figure FIGREF11 shows the trends of the total accuracy at different epochs for the various models using 3COSMUL (the trend obtained with 3COSADD is very similar). Here we can see how a high negative-sampling value can worsen performance, even causing the network to oscillate (W5N20) as it tries to adapt to all the data. The negative-sampling value should therefore be chosen jointly with the window size and the number of training epochs.
Continuing the training of the two worst models up to the 50th epoch, we observe (Table TABREF12) that they are still able to reach the performance of the other models. At the 50th epoch the W10N20 model even proves better than all the previous models, becoming the reference model for the subsequent comparisons. Across the epochs (Figure FIGREF13.a) it shows the same oscillatory pattern observed previously, albeit with only one oscillation given the larger window size. This model is available at: https://mlunicampania.gitlab.io/italian-word2vec/.
Various tests were also conducted on CBOW models, which in general proved to perform significantly worse than the Skip-gram models. Figure FIGREF13.b shows, for example, the accuracy trend for a CBOW model with a window of 10 and negative sampling of 20, which after 50 epochs reaches only $37.20\%$ total accuracy (with the 3COSMUL metric).
Results ::: Comparison with other models
Finally, a comparison was made between the Skip-gram model W10N20 obtained at the 50th epoch and the other two Italian W2V models in the literature (BIBREF9 and BIBREF10). The first test (Table TABREF15) was performed considering all the analogies, counting as an error any analogy that was not executable (i.e. involving one or more words absent from the vocabulary).
As can be seen, regardless of the metric used, our model obtains significantly better results than the other two models, both overall and within the two macro-areas. Furthermore, the other two models appear more sensitive to the metric used, perhaps because they have not yet stabilized after their few training epochs.
For a complete comparison, both models were also tested considering only the subset of the analogies in common with our model (i.e. eliminating from the test all those analogies that were not executable by one or the other model). Tables TABREF16 and TABREF17 again highlight the marked increase in performance of our model compared to both.
Conclusion
In this work we have analysed the Word2Vec model for the Italian language, obtaining a substantial increase in performance with respect to the other two models in the literature (despite the fixed embedding size). Besides the number of training epochs, these results are probably also due to the data pre-processing phase, carried out very carefully by fully cleaning the text and, above all, by substituting numerical values with a single special token. We have observed that the number of epochs is an important parameter: increasing it brings our two worst models on par with, or even above, the others.
Changing the number of epochs, in some configurations, creates an oscillatory trend, which seems to be linked to a particular interaction between the window size and the negative-sampling value. In the future, thanks to the collaboration within the Laila project, we intend to expand the dataset by adding more user chats. The objective will be to verify whether the use of a less formal language can improve accuracy in the syntactic macro-area. | extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila) |
7aab78e90ba1336950a2b0534cc0cb214b96b4fd | 7aab78e90ba1336950a2b0534cc0cb214b96b4fd_0 | Q: How are the auxiliary signals from the morphology table incorporated in the decoder?
Text: Introduction
Morphologically complex words (MCWs) are multi-layer structures which consist of different subunits, each of which carries semantic information and has a specific syntactic role. Table 1 gives a Turkish example to show this type of complexity. This example is a clear indication that word-based models are not suitable to process such complex languages. Accordingly, when translating MRLs, it might not be a good idea to treat words as atomic units as it demands a large vocabulary that imposes extra overhead. Since MCWs can appear in various forms we require a very large vocabulary to $i$ ) cover as many morphological forms and words as we can, and $ii$ ) reduce the number of OOVs. Neural models by their nature are complex, and we do not want to make them more complicated by working with large vocabularies. Furthermore, even if we have quite a large vocabulary set, clearly some words would remain uncovered by that. This means that a large vocabulary not only complicates the entire process, but also does not necessarily mitigate the OOV problem. For these reasons we propose an NMT engine which works at the character level.
In this paper, we focus on translating into MRLs and issues associated with word formation on the target side. To provide a better translation we do not necessarily need a large target lexicon, as an MCW can be gradually formed during decoding by means of its subunits, similar to the solution proposed in character-based decoding models BIBREF0 . Generating a complex word character-by-character is a better approach compared to word-level sampling, but it has other disadvantages.
One character can co-occur with another with almost no constraint, but a particular word or morpheme can only collocate with a very limited number of other constituents. Unlike words, characters are not meaning-bearing units and do not preserve syntactic information, so (in the extreme case) the chance of sampling each character by the decoder is almost equal to the others, but this situation is less likely for words. The only constraint that prioritizes which character should be sampled is the information stored in the decoder, which we believe is insufficient to cope with all ambiguities. Furthermore, when everything is segmented into characters, the target sentence with a limited number of words is changed into a very long sequence of characters, which clearly makes it harder for the decoder to remember such a long history. Accordingly, character-based information flows in the decoder may not be as informative as word- or morpheme-based information.
In the character-based NMT model everything is almost the same as its word-based counterpart except the target vocabulary whose size is considerably reduced from thousands of words to just hundreds of characters. If we consider the decoder as a classifier, it should in principle be able to perform much better over hundreds of classes (characters) rather than thousands (words), but the performance of character-based models is almost the same as or slightly better than their word-based versions. This underlines the fact that the character-based decoder is perhaps not fed with sufficient information to provide improved performance compared to word-based models.
Character-level decoding limits the search space by dramatically reducing the size of the target vocabulary, but at the same time widens the search space by working with characters whose sampling seems to be harder than words. The freedom in selection and sampling of characters can mislead the decoder, which prevents us from taking the maximum advantages of character-level decoding. If we can control the selection process with other constraints, we may obtain further benefit from restricting the vocabulary set, which is the main goal followed in this paper.
In order to address the aforementioned problems we redesign the neural decoder in three different scenarios. In the first scenario we equip the decoder with an additional morphology table including target-side affixes. We place an attention module on top of the table which is controlled by the decoder. At each step, as the decoder samples a character, it searches the table to find the most relevant information which can enrich its state. Signals sent from the table can be interpreted as additional constraints. In the second scenario we share the decoder between two output channels. The first one samples the target character and the other one predicts the morphological annotation of the character. This multi-tasking approach forces the decoder to send morphology-aware information to the final layer which results in better predictions. In the third scenario we combine these two models. Section "Proposed Architecture" provides more details on our models.
Together with different findings that will be discussed in the next sections, there are two main contributions in this paper. We redesigned and tuned the NMT framework for translating into MRLs. It is quite challenging to show the impact of external knowledge such as morphological information in neural models especially in the presence of large parallel corpora. However, our models are able to incorporate morphological information into decoding and boost its quality. We inject the decoder with morphological properties of the target language. Furthermore, the novel architecture proposed here is not limited to morphological information alone and is flexible enough to provide other types of information for the decoder.
NMT for MRLs
There are several models for NMT of MRLs which are designed to deal with morphological complexities. garcia2016factored and sennrich-haddow:2016:WMT adapted the factored machine translation approach to neural models. Morphological annotations can be treated as extra factors in such models. jean-EtAl:2015:ACL-IJCNLP proposed a model to handle very large vocabularies. luong-EtAl:2015:ACL-IJCNLP addressed the problem of rare words and OOVs with the help of a post-translation phase to exchange unknown tokens with their potential translations. sennrich2015neural used subword units for NMT. The model relies on frequent subword units instead of words. costajussa-fonollosa:2016:P16-2 designed a model for translating from MRLs. The model encodes source words with a convolutional module proposed by kim2015character. Each word is represented by a convolutional combination of its characters.
luong-manning:2016:P16-1 used a hybrid model for representing words. In their model, unseen and complex words are encoded with a character-based representation, with other words encoded via the usual surface-form embeddings. DBLP:journals/corr/VylomovaCHH16 compared different representation models (word-, morpheme, and character-level models) which try to capture complexities on the source side, for the task of translating from MRLs.
chung-cho-bengio proposed an architecture which benefits from different segmentation schemes. On the encoder side, words are segmented into subunits with the byte-pair segmentation model (bpe) BIBREF1 , and on the decoder side, one target character is produced at each time step. Accordingly, the target sequence is treated as a long chain of characters without explicit segmentation. W17-4727 focused on translating from English into Finnish and implicitly incorporated morphological information into NMT through multi-task learning. passbanPhD comprehensively studied the problem of translating MRLs and addressed potential challenges in the field.
Among all the models reviewed in this section, the network proposed by chung-cho-bengio could be seen as the best alternative for translating into MRLs as it works at the character level on the decoder side and it was evaluated in different settings on different languages. Consequently, we consider it as a baseline model in our experiments.
Proposed Architecture
We propose a compatible neural architecture for translating into MRLs. The model benefits from subword- and character-level information and improves upon the state-of-the-art model of chung-cho-bengio. We manipulated the model to incorporate morphological information and developed three new extensions, which are discussed in Sections "The Embedded Morphology Table" , "The Auxiliary Output Channel" , and "Combining the Extended Output Layer and the Embedded Morphology Table" .
The Embedded Morphology Table
In the first extension an additional table containing the morphological information of the target language is plugged into the decoder to assist with word formation. Each time the decoder samples from the target vocabulary, it searches the morphology table to find the most relevant affixes given its current state. Items selected from the table act as guiding signals to help the decoder sample a better character.
Our base model is an encoder-decoder model with attention BIBREF2 , implemented using gated recurrent units (GRUs) BIBREF3 . We use a four-layer model in our experiments. Similar to chung-cho-bengio and DBLP:journals/corr/WuSCLNMKCGMKSJL16, we use bidirectional units to encode the source sequence. Bidirectional GRUs are placed only at the input layer. The forward GRU reads the input sequence in its original order and the backward GRU reads the input in the reverse order. Each hidden state of the encoder in one time step is a concatenation of the forward and backward states at the same time step. This type of bidirectional processing provides a richer representation of the input sequence.
On the decoder side, one target character is sampled from a target vocabulary at each time step. In the original encoder-decoder model, the probability of predicting the next token $y_i$ is estimated based on $i$ ) the current hidden state of the decoder, $ii$ ) the last predicted token, and $iii$ ) the context vector. This process can be formulated as $p(y_i|y_1,...,y_{i-1},{\bf x}) = g(h_i,y_{i-1},{\bf c}_i)$ , where $g(.)$ is a softmax function, $y_i$ is the target token (to be predicted), $\textbf {x}$ is the representation of the input sequence, $h_i$ is the decoder's hidden state at the $i$ -th time step, and ${\bf c}_i$ indicates the context vector, which is a weighted summary of the input sequence generated by the attention module. ${\bf c}_i$ is generated via the procedure shown in ( 3 ):
$$\begin{aligned} {\bf c}_i &= \sum _{j=1}^{n} \alpha _{ij} s_j\\ \alpha _{ij} &= \frac{\exp (e_{ij})}{\sum _{k=1}^{n}\exp (e_{ik})}; \qquad e_{ij}=a(s_j, h_{i-1}) \end{aligned}$$ (Eq. 3)
where $\alpha _{ij}$ denotes the weight of the $j$ -th hidden state of the encoder ( $s_j$ ) when the decoder predicts the $i$ -th target token, and $a()$ shows a combinatorial function which can be modeled through a simple feed-forward connection. $n$ is the length of the input sequence.
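For concreteness, the attention step in ( 3 ) can be sketched as below; the scorer $a(\cdot )$ is modelled as a small feed-forward layer, and the parameter names W_s, W_h and v are only illustrative, not the ones used in our implementation.

import numpy as np

def context_vector(S, h_prev, W_s, W_h, v):
    # S: encoder states, shape (n, d_enc); h_prev: previous decoder state, shape (d_dec,)
    e = np.tanh(S @ W_s + h_prev @ W_h) @ v   # e_ij = a(s_j, h_{i-1}), shape (n,)
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                      # attention weights alpha_ij
    return alpha @ S                          # c_i = sum_j alpha_ij * s_j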
In our first extension, the prediction probability is conditioned on one more constraint in addition to those three existing ones, as in $p(y_i|y_1,...,y_{i-1},{\bf x}) = g(h_i,y_{i-1},{\bf c}_i, {\bf c}^m_i)$ , where ${\bf c}^m_i$ is the morphological context vector and carries information from those useful affixes which can enrich the decoder's information. ${\bf c}^m_i$ is generated via an attention module over the morphology table which works in a similar manner to word-based attention model. The attention procedure for generating ${\bf c}^m_i$ is formulated as in ( 5 ):
$$\begin{aligned} {\bf c}^m_i &= \sum _{u=1}^{|\mathcal {A}|} \beta _{iu} f_u\\ \beta _{iu} &= \frac{\exp (e^m_{iu})}{\sum _{v=1}^{|\mathcal {A}|} \exp (e^m_{iv})}; \qquad e^m_{iu}= a^m(f_u, h_{i-1}) \end{aligned}$$ (Eq. 5)
where $f_u$ represents the embedding of the $u$ -th affix ( $u$ -th column) in the morphology/affix table $\mathcal {A}$ , $\beta _{iu}$ is the weight assigned to $f_u$ when predicting the $i$ -th target token, and $a^m$ is a feed-forward connection between the morphology table and the decoder.
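The same mechanism, applied to the affix embeddings instead of the encoder states, gives the morphological context vector of ( 5 ). In the sketch below A is the morphology table with one row per affix, and the parameter names are again only illustrative.

import numpy as np

def morphology_context(A, h_prev, W_f, W_h, v):
    # A: affix embeddings, shape (|A|, d_aff); h_prev: previous decoder state
    e_m = np.tanh(A @ W_f + h_prev @ W_h) @ v   # e^m_iu = a^m(f_u, h_{i-1})
    beta = np.exp(e_m - e_m.max())
    beta /= beta.sum()                          # weights beta_iu over all affixes
    return beta @ A                             # c^m_i = sum_u beta_iu * f_u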
The attention module in general can be considered as a search mechanism, e.g. in the original encoder-decoder architecture the basic attention module finds the most relevant input words to make the prediction. In multi-modal NMT BIBREF4 , BIBREF5 an extra attention module is added to the basic one in order to search the image input to find the most relevant image segments. In our case we have a similar additional attention module which searches the morphology table.
In this scenario, the morphology table including the target language's affixes can be considered as an external knowledge repository that sends auxiliary signals which accompany the main input sequence at all time steps. Such a table certainly includes useful information for the decoder. As we are not sure which affix preserves those pieces of useful information, we use an attention module to search for the best match. The attention module over the table works as a filter which excludes irrelevant affixes and amplifies the impact of relevant ones by assigning different weights ( $\beta $ values).
The Auxiliary Output Channel
In the first scenario, we embedded a morphology table into the decoder in the hope that it can enrich sampling information. Mathematically speaking, such an architecture establishes an extra constraint for sampling and can control the decoder's predictions. However, this is not the only way of constraining the decoder. In the second scenario, we define extra supervision to the network via another predictor (output channel). The first channel is responsible for generating translations and predicts one character at each time step, and the other one tries to understand the morphological status of the decoder by predicting the morphological annotation ( $l_i$ ) of the target character.
The approach in the second scenario proposes a multi-task learning architecture, by which in one task we learn translations and in the other one morphological annotations. Therefore, all network modules –especially the last hidden layer just before the predictors– should provide information which is useful enough to make correct predictions in both channels, i.e. the decoder should preserve translation as well as morphological knowledge. Since we are translating into MRLs this type of mixed information (morphology+translation) can be quite useful.
In our setting, the morphological annotation $l_i$ predicted via the second channel shows to which part of the word or morpheme the target character belongs, i.e. the label for the character is the morpheme that includes it. We clarify the prediction procedure via an example from our training set (see Section "Experimental Study" ). When the Turkish word `terbiyesizlik' is generated, the first channel is supposed to predict t, e, r, up to k, one after another. For the same word, the second channel is supposed to predict stem-C for the first 7 steps, as the first 7 characters `terbiye' belong to the stem of the word. The C sign indicates that stem-C is a class label. The second channel should also predict siz-C when the first channel predicts s (eighth character), i (ninth character), and z (tenth character), and lik-C when the first channel samples the last three characters. Clearly, the second channel is a classifier which works over the {stem-C, siz-C, lik-C, ...} classes. Figure 1 illustrates a segment of a sentence including this Turkish word and explains which class tags should be predicted by each channel.
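The construction of the two target sequences for this example can be sketched as follows; the segmentation is the one described above, and the code itself is only illustrative.

segments = [("terbiye", "stem-C"), ("siz", "siz-C"), ("lik", "lik-C")]
chars, labels = [], []
for morpheme, cls in segments:
    for ch in morpheme:
        chars.append(ch)    # target for the first (translation) channel
        labels.append(cls)  # target for the second (morphology) channel
# chars  -> ['t', 'e', 'r', 'b', 'i', 'y', 'e', 's', 'i', 'z', 'l', 'i', 'k']
# labels -> ['stem-C'] * 7 + ['siz-C'] * 3 + ['lik-C'] * 3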
To implement the second scenario we require a single-source double-target training corpus: [source sentence] $\rightarrow $ [sequence of target characters $\&$ sequence of morphological annotations] (see Section "Experimental Study" ). The objective function should also be manipulated accordingly. Given a training set $\lbrace {\bf x}_t, {\bf y}_t, {\bf m}_t\rbrace _{t=1}^{T}$ the goal is to maximize the joint loss function shown in ( 7 ):
$$\lambda \sum _{t=1}^{T}\log {P({\bf y}_t|{\bf x}_t;\theta )} + (1-\lambda ) \sum _{t=1}^{T}\log {P({\bf m}_t|{\bf x}_t;\theta )}$$ (Eq. 7)
where $\textbf {x}_t$ is the $t$ -th input sentence whose translation is a sequence of target characters shown by $\textbf {y}_t$ . $\textbf {m}_t$ is the sequence of morphological annotations and $T$ is the size of the training set. $\theta $ is the set of network parameters and $\lambda $ is a scalar to balance the contribution of each cost function. $\lambda $ is adjusted on the development set during training.
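A minimal sketch of the joint objective in ( 7 ) is given below; the two inputs are the per-sentence log-likelihoods produced by the translation and annotation channels, and the value of lam shown is only a placeholder for the $\lambda $ tuned on the development set.

import numpy as np

def joint_objective(logp_translation, logp_annotation, lam=0.5):
    # logp_*: arrays of shape (T,), one summed log-probability per training sentence
    return lam * np.sum(logp_translation) + (1.0 - lam) * np.sum(logp_annotation)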
Combining the Extended Output Layer and the Embedded Morphology Table
In the first scenario, we aim to provide the decoder with useful information about morphological properties of the target language, but we are not sure whether signals sent from the table are what we really need. They might be helpful or even harmful, so there should be a mechanism to control their quality. In the second scenario we also have a similar problem as the last layer requires some information to predict the correct morphological class through the second channel, but there is no guarantee to ensure that information in the decoder is sufficient for this sort of prediction. In order to address these problems, in the third extension we combine both scenarios as they are complementary and can potentially help each other.
The morphology table acts as an additional useful source of knowledge as it already consists of affixes, but its content should be adapted according to the decoder and its actual needs. Accordingly, we need a trainer to update the table properly. The extra prediction channel plays this role for us as it forces the network to predict the target language's affixes at the output layer. The error computed in the second channel is back-propagated to the network including the morphology table and updates its affix information into what the decoder actually needs for its prediction. Therefore, the second output channel helps us train better affix embeddings.
The morphology table also helps the second predictor. Without considering the table, the last layer only includes information about the input sequence and previously predicted outputs, which is not directly related to morphological information. The second attention module retrieves useful affixes from the morphology table and concatenates to the last layer, which means the decoder is explicitly fed with morphological information. Therefore, these two modules mutually help each other. The external channel helps update the morphology table with high-quality affixes (backward pass) and the table sends its high-quality signals to the prediction layer (forward pass). The relation between these modules and the NMT architecture is illustrated in Figure 2 .
Experimental Study
As previously reviewed, different models try to capture complexities on the encoder side, but to the best of our knowledge the only model which proposes a technique to deal with complex constituents on the decoder side is that of chung-cho-bengio, which should be an appropriate baseline for our comparisons. Moreover, it outperforms other existing NMT models, so we prefer to compare our network to the best existing model. This model is referred to as CDNMT in our experiments. In the next sections first we explain our experimental setting, corpora, and how we build the morphology table (Section "Experimental Setting" ), and then report experimental results (Section "Experimental Results" ).
Experimental Setting
In order to make our work comparable we try to follow the same experimental setting used in CDNMT, where the GRU size is 1024, the affix and word embedding size is 512, and the beam width is 20. Our models are trained using stochastic gradient descent with Adam BIBREF6 . chung-cho-bengio and sennrich2015neural demonstrated that bpe boosts NMT, so similar to CDNMT we also preprocess the source side of our corpora using bpe. We use WMT-15 corpora to train the models, newstest-2013 for tuning and newstest-2015 as the test sets. For English–Turkish (En–Tr) we use the OpenSubtitle2016 collection BIBREF7 . The training side of the English–German (En–De), English–Russian (En–Ru), and En–Tr corpora include $4.5$ , $2.1$ , and 4 million parallel sentences, respectively. We randomly select 3K sentences for each of the development and test sets for En–Tr. For all language pairs we keep the 400 most frequent characters as the target-side character set and replace the remainder (infrequent characters) with a specific character.
One of the key modules in our architecture is the morphology table. In order to implement it we use a look-up table whose columns include embeddings for the target language's affixes (each column represents one affix) which are updated during training. As previously mentioned, the table is intended to provide useful, morphological information so it should be initialized properly, for which we use a morphology-aware embedding-learning model. To this end, we use the neural language model of botha2014compositional in which each word is represented via a linear combination of the embeddings of its surface form and subunits, e.g. $\overrightarrow{terbiyesizlik} = \overrightarrow{terbiyesizlik} + \overrightarrow{terbiye} + \overrightarrow{siz} + \overrightarrow{lik}$ . Given a sequence of words, the neural language model tries to predict the next word, so it learns sentence-level dependencies as well as intra-word relations. The model trains surface form and subword-level embeddings which provides us with high-quality affix embeddings.
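The additive representation can be sketched as below; embeddings is an assumed lookup from surface forms and subunits to vectors, and the 512-dimensional size matches the setting above.

import numpy as np

def compose(word, subunits, embeddings, dim=512):
    # composed vector = surface-form embedding + embeddings of the subunits
    vec = np.array(embeddings.get(word, np.zeros(dim)), dtype=np.float64)
    for unit in subunits:
        vec = vec + embeddings.get(unit, np.zeros(dim))
    return vec

# e.g. compose("terbiyesizlik", ["terbiye", "siz", "lik"], embeddings)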
Our neural language model is a recurrent network with a single 1000-dimensional GRU layer, which is trained on the target sides of our parallel corpora. The embedding size is 512 and we use a batch size of 100 to train the model. Before training the neural language model, we need to manipulate the training corpus to decompose words into morphemes for which we use Morfessor BIBREF8 , an unsupervised morphological analyzer. Using Morfessor each word is segmented into different subunits where we consider the longest part as the stem of each word; what appears before the stem is taken as a member of the set of prefixes (there might be one or more prefixes) and what follows the stem is considered as a member of the set of suffixes.
Since Morfessor is an unsupervised analyzer, in order to minimize segmentation errors and avoid noisy results we filter its output and exclude subunits which occur fewer than 500 times. After decomposing, filtering, and separating stems from affixes, we extracted several affixes which are reported in Table 2 . We emphasize that there might be wrong segmentations in Morfessor's output, e.g. Turkish is a suffix-based language, so there are no prefixes in this language, but based on what Morfessor generated we extracted 11 different types of prefixes. We do not post-process Morfessor's outputs.
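The affix-extraction step can be sketched as follows, assuming the Morfessor 2.0 Python API and a previously trained model: the longest subunit of each segmentation is taken as the stem, everything before it as a prefix and everything after it as a suffix, and the frequency filter of 500 is applied at the end. The model file name and the vocabulary variable are placeholders.

from collections import Counter
import morfessor

io = morfessor.MorfessorIO()
model = io.read_binary_model_file("morfessor_model.bin")   # trained beforehand

prefix_counts, suffix_counts = Counter(), Counter()
for word in vocabulary:                                    # placeholder word list
    parts, _ = model.viterbi_segment(word)
    stem_idx = max(range(len(parts)), key=lambda i: len(parts[i]))
    prefix_counts.update(parts[:stem_idx])
    suffix_counts.update(parts[stem_idx + 1:])

prefixes = {p for p, c in prefix_counts.items() if c >= 500}
suffixes = {s for s, c in suffix_counts.items() if c >= 500}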
Using the neural language model we train word, stem, and affix embeddings, and initialize the look-up table (but not other parts) of the decoder using those affixes. The look-up table includes high-quality affixes trained on the target side of the parallel corpus by which we train the translation model. Clearly, such an affix table is an additional knowledge source for the decoder. It preserves information which is very close to what the decoder actually needs. However, there might be some missing pieces of information or some incompatibility between the decoder and the table, so we do not freeze the morphology table during training, but let the decoder update it with respect to its needs in the forward and backward passes.
Experimental Results
Table 3 summarizes our experimental results. We report results for the bpe $\rightarrow $ char setting, which means the source token is a bpe unit and the decoder samples a character at each time step. CDNMT is the baseline model. Table 3 includes scores reported from the original CDNMT model BIBREF0 as well as the scores from our reimplementation. To make our work comparable and show the impact of the new architecture, we tried to replicate CDNMT's results in our experimental setting; we kept everything (parameters, iterations, epochs, etc.) unchanged and evaluated the extended model in the same setting. Table 3 reports BLEU scores BIBREF9 of our NMT models.
Table 3 can be interpreted from different perspectives but the main findings are summarized as follows:
The morphology table yields significant improvements for all languages and settings.
The morphology table boosts the En–Tr engine more than others and we think this is because of the nature of the language. Turkish is an agglutinative language in which morphemes are clearly separable from each other, but in German and Russian morphological transformations rely more on fusional operations rather than agglutination.
It seems that there is a direct relation between the size of the morphology table and the gain provided for the decoder, because Russian and Turkish have bigger tables and benefit from the table more than German which has fewer affixes.
The auxiliary output channel is even more useful than the morphology table for all settings but En–Ru, and we think this is because of the morpheme-per-word ratio in Russian. The number of morphemes attached to a Russian word is usually higher than for German and Turkish words in our corpora, which makes the prediction harder for the classifier (the more suffixes attached to a word, the harder the classification task).
The combination of the morphology table and the extra output channel provides the best result for all languages.
Figure 3 depicts the impact of the morphology table and the extra output channel for each language.
To further study our models' behaviour and ensure that our extensions do not generate random improvements, we visualized some attention weights when generating `terbiyesizlik'. In Figure 4 , the upper figure shows attention weights for all Turkish affixes, where the y axis shows different time steps and the x axis includes attention weights of all affixes (304 columns) for those time steps, e.g. the entry in the first row and first column represents the attention weight assigned to the first Turkish affix when sampling t in `terbiyesizlik'. While at first glance the figure may appear somewhat confusing, it provides some interesting insights which we elaborate on next.
In addition to the whole attention matrix we also visualized a subset of weights to show how the morphology table provides useful information. In the second figure we study the behaviour of the morphology table for the first (t $_1$ ), fifth (i $_5$ ), ninth (i $_{9}$ ), and twelfth (i $_{12}$ ) time steps when generating the same Turkish word `t $_1$ erbi $_5$ yesi $_9$ zli $_{12}$ k'. t $_1$ is the first character of the word. We also have three i characters from different morphemes, where the first one is part of the stem, the second one belongs to the suffix `siz', and the third one to `lik'. It is interesting to see how the table reacts to the same character from different parts. For each time step we selected the top-10 affixes which have the highest attention weights. The set of top-10 affixes can be different for each step, so we made a union of those sets which gives us 22 affixes. The bottom part of Figure 4 shows the attention weights for those 22 affixes at each time step.
After analyzing the weights we observed interesting properties about the morphology table and the auxiliary attention module. The main findings about the behaviour of the table are as follows:
The model assigns high attention weights to stem-C for almost all time steps. However, the weights assigned to this class for t $_1$ and i $_5$ are much higher than those of affix characters (as they are part of the stem). The vertical lines in both figures confirm this feature (bad behaviour).
For some unknown reason there are some affixes which have no direct relation to that particular time step but still receive high attention, such as maz in t $_{12}$ (bad behaviour).
For almost all time steps the highest attention weight belongs to the class which is expected to be selected, e.g. weights for (i $_5$ ,stem-C) or (i $_{9}$ ,siz-C) (good behaviour).
The morphology table may send bad or good signals but it is consistent for similar or co-occurring characters, e.g. for the last three time steps l $_{11}$ , i $_{12}$ , and k $_{13}$ , almost the same set of affixes receives the highest attention weights. This consistency is exactly what we are looking for, as it can define a reliable external constraint for the decoder to guide it. Vertical lines on the figure also confirm this fact. They show that for a set of consecutive characters which belong to the same morpheme the attention module sends a signal from a particular affix (good behaviour).
There are some affixes which might not be directly related to that time step but receive high attention weights. This is because those affixes either include the same character which the decoder tries to predict (e.g. i-C for i $_{4}$ or t-C and tin-C for t $_{1}$ ), or frequently appear with that part of the word which includes the target character (e.g. mi-C has a high weight when predicting t $_1$ because t $_1$ belongs to terbiye which frequently collocates with mi-C: terbiye+mi) (good behaviour).
Finally, in order to complete our evaluation study we feed the English-to-German NMT model with the sentence `Terms and conditions for sending contributions to the BBC', to show how the model behaves differently and generates a better target sentence. Translations generated by our models are illustrated in Table 4 .
The table demonstrates that our architecture is able to control the decoder and limit its selections, e.g. the word `allgemeinen' generated by the baseline model is redundant. There is no constraint to inform the baseline model that this word should not be generated, whereas our proposed architecture controls the decoder in such situations. After analyzing our model, we realized that there are strong attention weights assigned to the w-space (indicating white space characters) and BOS (beginning of the sequence) columns of the affix table while sampling the first character of the word `Geschäft', which shows that the decoder is informed about the start point of the sequence. Similar to the baseline model's decoder, our decoder can sample any character including `a' of `allgemeinen' or `G' of `Geschäft'. Translation information stored in the baseline decoder is not sufficient for selecting the right character `G', so the decoder wrongly starts with `i' and continues along a wrong path up to generating the whole word. However, our decoder's information is accompanied with signals from the affix table which force it to start with a better initial character, whose sampling leads to generating the correct target word.
Another interesting feature about the table is the new structure `Geschäft s bedingungen' generated by the improved model. As the reference translation shows, in the correct form these two structures should be glued together via `s', which can be considered as an infix. As our model is supposed to detect this sort of intra-word relation, it treats the whole structure as two compounds which are connected to one another via an infix. Although this is not a correct translation and it would be trivial to post-edit into the correct output form, it is interesting to see how our mechanism forces the decoder to pay attention to intra-word relations.
Apart from these two interesting findings, the number of wrong character selections in the baseline model is considerably reduced in the improved model because of our enhanced architecture.
Conclusion and Future Work
In this paper we proposed a new architecture to incorporate morphological information into the NMT pipeline. We extended the state-of-the-art NMT model BIBREF0 with a morphology table. The table could be considered as an external knowledge source which is helpful as it increases the capacity of the model by increasing the number of network parameters. We tried to benefit from this advantage. Moreover, we managed to fill the table with morphological information to further boost the NMT model when translating into MRLs. Apart from the table we also designed an additional output channel which forces the decoder to predict morphological annotations. The error signals coming from the second channel during training inform the decoder with morphological properties of the target language. Experimental results show that our techniques were useful for NMT of MRLs.
As our future work we follow three main ideas. $i$ ) We try to find more efficient ways to supply morphological information for both the encoder and decoder. $ii$ ) We plan to benefit from other types of information such as syntactic and semantic annotations to boost the decoder, as the table is not limited to morphological information alone and can preserve other sorts of information. $iii$ ) Finally, we target sequence generation for fusional languages. Although our model showed significant improvements for both German and Russian, the proposed model is more suitable for generating sequences in agglutinative languages.
Acknowledgments
We thank our anonymous reviewers for their valuable feedback, as well as the Irish centre for high-end computing (www.ichec.ie) for providing computational infrastructures. This work has been supported by the ADAPT Centre for Digital Content Technology which is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund. | an additional morphology table including target-side affixes., We inject the decoder with morphological properties of the target language. |
b7fe91e71da8f4dc11e799b3bd408d253230e8c6 | b7fe91e71da8f4dc11e799b3bd408d253230e8c6_0 | Q: What type of morphological information is contained in the "morphology table"?
Text: Introduction
Morphologically complex words (MCWs) are multi-layer structures which consist of different subunits, each of which carries semantic information and has a specific syntactic role. Table 1 gives a Turkish example to show this type of complexity. This example is a clear indication that word-based models are not suitable to process such complex languages. Accordingly, when translating MRLs, it might not be a good idea to treat words as atomic units as it demands a large vocabulary that imposes extra overhead. Since MCWs can appear in various forms we require a very large vocabulary to $i$ ) cover as many morphological forms and words as we can, and $ii$ ) reduce the number of OOVs. Neural models by their nature are complex, and we do not want to make them more complicated by working with large vocabularies. Furthermore, even if we have quite a large vocabulary set, clearly some words would remain uncovered by that. This means that a large vocabulary not only complicates the entire process, but also does not necessarily mitigate the OOV problem. For these reasons we propose an NMT engine which works at the character level.
In this paper, we focus on translating into MRLs and issues associated with word formation on the target side. To provide a better translation we do not necessarily need a large target lexicon, as an MCW can be gradually formed during decoding by means of its subunits, similar to the solution proposed in character-based decoding models BIBREF0 . Generating a complex word character-by-character is a better approach compared to word-level sampling, but it has other disadvantages.
One character can co-occur with another with almost no constraint, but a particular word or morpheme can only collocate with a very limited number of other constituents. Unlike words, characters are not meaning-bearing units and do not preserve syntactic information, so (in the extreme case) the chance of sampling each character by the decoder is almost equal to the others, but this situation is less likely for words. The only constraint that prioritizes which character should be sampled is the information stored in the decoder, which we believe is insufficient to cope with all ambiguities. Furthermore, when everything is segmented into characters, the target sentence with a limited number of words is changed into a very long sequence of characters, which clearly makes it harder for the decoder to remember such a long history. Accordingly, character-based information flows in the decoder may not be as informative as word- or morpheme-based information.
In the character-based NMT model everything is almost the same as its word-based counterpart except the target vocabulary whose size is considerably reduced from thousands of words to just hundreds of characters. If we consider the decoder as a classifier, it should in principle be able to perform much better over hundreds of classes (characters) rather than thousands (words), but the performance of character-based models is almost the same as or slightly better than their word-based versions. This underlines the fact that the character-based decoder is perhaps not fed with sufficient information to provide improved performance compared to word-based models.
Character-level decoding limits the search space by dramatically reducing the size of the target vocabulary, but at the same time widens the search space by working with characters whose sampling seems to be harder than words. The freedom in selection and sampling of characters can mislead the decoder, which prevents us from taking the maximum advantages of character-level decoding. If we can control the selection process with other constraints, we may obtain further benefit from restricting the vocabulary set, which is the main goal followed in this paper.
In order to address the aforementioned problems we redesign the neural decoder in three different scenarios. In the first scenario we equip the decoder with an additional morphology table including target-side affixes. We place an attention module on top of the table which is controlled by the decoder. At each step, as the decoder samples a character, it searches the table to find the most relevant information which can enrich its state. Signals sent from the table can be interpreted as additional constraints. In the second scenario we share the decoder between two output channels. The first one samples the target character and the other one predicts the morphological annotation of the character. This multi-tasking approach forces the decoder to send morphology-aware information to the final layer which results in better predictions. In the third scenario we combine these two models. Section "Proposed Architecture" provides more details on our models.
Together with different findings that will be discussed in the next sections, there are two main contributions in this paper. We redesigned and tuned the NMT framework for translating into MRLs. It is quite challenging to show the impact of external knowledge such as morphological information in neural models especially in the presence of large parallel corpora. However, our models are able to incorporate morphological information into decoding and boost its quality. We inject the decoder with morphological properties of the target language. Furthermore, the novel architecture proposed here is not limited to morphological information alone and is flexible enough to provide other types of information for the decoder.
NMT for MRLs
There are several models for NMT of MRLs which are designed to deal with morphological complexities. garcia2016factored and sennrich-haddow:2016:WMT adapted the factored machine translation approach to neural models. Morphological annotations can be treated as extra factors in such models. jean-EtAl:2015:ACL-IJCNLP proposed a model to handle very large vocabularies. luong-EtAl:2015:ACL-IJCNLP addressed the problem of rare words and OOVs with the help of a post-translation phase to exchange unknown tokens with their potential translations. sennrich2015neural used subword units for NMT. The model relies on frequent subword units instead of words. costajussa-fonollosa:2016:P16-2 designed a model for translating from MRLs. The model encodes source words with a convolutional module proposed by kim2015character. Each word is represented by a convolutional combination of its characters.
luong-manning:2016:P16-1 used a hybrid model for representing words. In their model, unseen and complex words are encoded with a character-based representation, with other words encoded via the usual surface-form embeddings. DBLP:journals/corr/VylomovaCHH16 compared different representation models (word-, morpheme, and character-level models) which try to capture complexities on the source side, for the task of translating from MRLs.
chung-cho-bengio proposed an architecture which benefits from different segmentation schemes. On the encoder side, words are segmented into subunits with the byte-pair segmentation model (bpe) BIBREF1 , and on the decoder side, one target character is produced at each time step. Accordingly, the target sequence is treated as a long chain of characters without explicit segmentation. W17-4727 focused on translating from English into Finnish and implicitly incorporated morphological information into NMT through multi-task learning. passbanPhD comprehensively studied the problem of translating MRLs and addressed potential challenges in the field.
Among all the models reviewed in this section, the network proposed by chung-cho-bengio could be seen as the best alternative for translating into MRLs as it works at the character level on the decoder side and it was evaluated in different settings on different languages. Consequently, we consider it as a baseline model in our experiments.
Proposed Architecture
We propose a compatible neural architecture for translating into MRLs. The model benefits from subword- and character-level information and improves upon the state-of-the-art model of chung-cho-bengio. We manipulated the model to incorporate morphological information and developed three new extensions, which are discussed in Sections "The Embedded Morphology Table" , "The Auxiliary Output Channel" , and "Combining the Extended Output Layer and the Embedded Morphology Table" .
The Embedded Morphology Table
In the first extension an additional table containing the morphological information of the target language is plugged into the decoder to assist with word formation. Each time the decoder samples from the target vocabulary, it searches the morphology table to find the most relevant affixes given its current state. Items selected from the table act as guiding signals to help the decoder sample a better character.
Our base model is an encoder-decoder model with attention BIBREF2 , implemented using gated recurrent units (GRUs) BIBREF3 . We use a four-layer model in our experiments. Similar to chung-cho-bengio and DBLP:journals/corr/WuSCLNMKCGMKSJL16, we use bidirectional units to encode the source sequence. Bidirectional GRUs are placed only at the input layer. The forward GRU reads the input sequence in its original order and the backward GRU reads the input in the reverse order. Each hidden state of the encoder in one time step is a concatenation of the forward and backward states at the same time step. This type of bidirectional processing provides a richer representation of the input sequence.
On the decoder side, one target character is sampled from a target vocabulary at each time step. In the original encoder-decoder model, the probability of predicting the next token $y_i$ is estimated based on $i$ ) the current hidden state of the decoder, $ii$ ) the last predicted token, and $iii$ ) the context vector. This process can be formulated as $p(y_i|y_1,...,y_{i-1},{\bf x}) = g(h_i,y_{i-1},{\bf c}_i)$ , where $g(.)$ is a softmax function, $y_i$ is the target token (to be predicted), $\textbf {x}$ is the representation of the input sequence, $h_i$ is the decoder's hidden state at the $i$ -th time step, and ${\bf c}_i$ indicates the context vector, which is a weighted summary of the input sequence generated by the attention module. ${\bf c}_i$ is generated via the procedure shown in ( 3 ):
$$\begin{aligned} {\bf c}_i &= \sum _{j=1}^{n} \alpha _{ij} s_j\\ \alpha _{ij} &= \frac{\exp (e_{ij})}{\sum _{k=1}^{n}\exp (e_{ik})}; \qquad e_{ij}=a(s_j, h_{i-1}) \end{aligned}$$ (Eq. 3)
where $\alpha _{ij}$ denotes the weight of the $j$ -th hidden state of the encoder ( $s_j$ ) when the decoder predicts the $i$ -th target token, and $a()$ shows a combinatorial function which can be modeled through a simple feed-forward connection. $n$ is the length of the input sequence.
In our first extension, the prediction probability is conditioned on one more constraint in addition to those three existing ones, as in $p(y_i|y_1,...,y_{i-1},{\bf x}) = g(h_i,y_{i-1},{\bf c}_i, {\bf c}^m_i)$ , where ${\bf c}^m_i$ is the morphological context vector and carries information from those useful affixes which can enrich the decoder's information. ${\bf c}^m_i$ is generated via an attention module over the morphology table which works in a similar manner to word-based attention model. The attention procedure for generating ${\bf c}^m_i$ is formulated as in ( 5 ):
$$\begin{aligned} {\bf c}^m_i &= \sum _{u=1}^{|\mathcal {A}|} \beta _{iu} f_u\\ \beta _{iu} &= \frac{\exp (e^m_{iu})}{\sum _{v=1}^{|\mathcal {A}|} \exp (e^m_{iv})}; \qquad e^m_{iu}= a^m(f_u, h_{i-1}) \end{aligned}$$ (Eq. 5)
where $f_u$ represents the embedding of the $u$ -th affix ( $u$ -th column) in the morphology/affix table $\mathcal {A}$ , $\beta _{iu}$ is the weight assigned to $f_u$ when predicting the $i$ -th target token, and $a^m$ is a feed-forward connection between the morphology table and the decoder.
The attention module in general can be considered as a search mechanism, e.g. in the original encoder-decoder architecture the basic attention module finds the most relevant input words to make the prediction. In multi-modal NMT BIBREF4 , BIBREF5 an extra attention module is added to the basic one in order to search the image input to find the most relevant image segments. In our case we have a similar additional attention module which searches the morphology table.
In this scenario, the morphology table including the target language's affixes can be considered as an external knowledge repository that sends auxiliary signals which accompany the main input sequence at all time steps. Such a table certainly includes useful information for the decoder. As we are not sure which affix preserves those pieces of useful information, we use an attention module to search for the best match. The attention module over the table works as a filter which excludes irrelevant affixes and amplifies the impact of relevant ones by assigning different weights ( $\beta $ values).
The Auxiliary Output Channel
In the first scenario, we embedded a morphology table into the decoder in the hope that it can enrich sampling information. Mathematically speaking, such an architecture establishes an extra constraint for sampling and can control the decoder's predictions. However, this is not the only way of constraining the decoder. In the second scenario, we define extra supervision to the network via another predictor (output channel). The first channel is responsible for generating translations and predicts one character at each time step, and the other one tries to understand the morphological status of the decoder by predicting the morphological annotation ( $l_i$ ) of the target character.
The approach in the second scenario proposes a multi-task learning architecture, by which in one task we learn translations and in the other one morphological annotations. Therefore, all network modules –especially the last hidden layer just before the predictors– should provide information which is useful enough to make correct predictions in both channels, i.e. the decoder should preserve translation as well as morphological knowledge. Since we are translating into MRLs this type of mixed information (morphology+translation) can be quite useful.
In our setting, the morphological annotation $l_i$ predicted via the second channel shows to which part of the word or morpheme the target character belongs, i.e. the label for the character is the morpheme that includes it. We clarify the prediction procedure via an example from our training set (see Section "Experimental Study" ). When the Turkish word `terbiyesizlik' is generated, the first channel is supposed to predict t, e, r, up to k, one after another. For the same word, the second channel is supposed to predict stem-C for the first 7 steps, as the first 7 characters `terbiye' belong to the stem of the word. The C sign indicates that stem-C is a class label. The second channel should also predict siz-C when the first channel predicts s (eighth character), i (ninth character), and z (tenth character), and lik-C when the first channel samples the last three characters. Clearly, the second channel is a classifier which works over the {stem-C, siz-C, lik-C, ...} classes. Figure 1 illustrates a segment of a sentence including this Turkish word and explains which class tags should be predicted by each channel.
To implement the second scenario we require a single-source double-target training corpus: [source sentence] $\rightarrow $ [sequence of target characters $\&$ sequence of morphological annotations] (see Section "Experimental Study" ). The objective function should also be manipulated accordingly. Given a training set $\lbrace {\bf x}_t, {\bf y}_t, {\bf m}_t\rbrace _{t=1}^{T}$ the goal is to maximize the joint loss function shown in ( 7 ):
$$\lambda \sum _{t=1}^{T}\log {P({\bf y}_t|{\bf x}_t;\theta )} + (1-\lambda ) \sum _{t=1}^{T}\log {P({\bf m}_t|{\bf x}_t;\theta )}$$ (Eq. 7)
where $\textbf {x}_t$ is the $t$ -th input sentence whose translation is a sequence of target characters shown by $\textbf {y}_t$ . $\textbf {m}_t$ is the sequence of morphological annotations and $T$ is the size of the training set. $\theta $ is the set of network parameters and $\lambda $ is a scalar to balance the contribution of each cost function. $\lambda $ is adjusted on the development set during training.
Combining the Extended Output Layer and the Embedded Morphology Table
In the first scenario, we aim to provide the decoder with useful information about morphological properties of the target language, but we are not sure whether signals sent from the table are what we really need. They might be helpful or even harmful, so there should be a mechanism to control their quality. In the second scenario we also have a similar problem as the last layer requires some information to predict the correct morphological class through the second channel, but there is no guarantee to ensure that information in the decoder is sufficient for this sort of prediction. In order to address these problems, in the third extension we combine both scenarios as they are complementary and can potentially help each other.
The morphology table acts as an additional useful source of knowledge as it already consists of affixes, but its content should be adapted according to the decoder and its actual needs. Accordingly, we need a trainer to update the table properly. The extra prediction channel plays this role for us as it forces the network to predict the target language's affixes at the output layer. The error computed in the second channel is back-propagated to the network including the morphology table and updates its affix information into what the decoder actually needs for its prediction. Therefore, the second output channel helps us train better affix embeddings.
The morphology table also helps the second predictor. Without considering the table, the last layer only includes information about the input sequence and previously predicted outputs, which is not directly related to morphological information. The second attention module retrieves useful affixes from the morphology table and concatenates to the last layer, which means the decoder is explicitly fed with morphological information. Therefore, these two modules mutually help each other. The external channel helps update the morphology table with high-quality affixes (backward pass) and the table sends its high-quality signals to the prediction layer (forward pass). The relation between these modules and the NMT architecture is illustrated in Figure 2 .
Experimental Study
As previously reviewed, different models try to capture complexities on the encoder side, but to the best of our knowledge the only model which proposes a technique to deal with complex constituents on the decoder side is that of chung-cho-bengio, which should be an appropriate baseline for our comparisons. Moreover, it outperforms other existing NMT models, so we prefer to compare our network to the best existing model. This model is referred to as CDNMT in our experiments. In the next sections first we explain our experimental setting, corpora, and how we build the morphology table (Section "Experimental Setting" ), and then report experimental results (Section "Experimental Results" ).
Experimental Setting
In order to make our work comparable we try to follow the same experimental setting used in CDNMT, where the GRU size is 1024, the affix and word embedding size is 512, and the beam width is 20. Our models are trained using stochastic gradient descent with Adam BIBREF6 . chung-cho-bengio and sennrich2015neural demonstrated that bpe boosts NMT, so similar to CDNMT we also preprocess the source side of our corpora using bpe. We use WMT-15 corpora to train the models, newstest-2013 for tuning and newstest-2015 as the test sets. For English–Turkish (En–Tr) we use the OpenSubtitle2016 collection BIBREF7 . The training side of the English–German (En–De), English–Russian (En–Ru), and En–Tr corpora include $4.5$ , $2.1$ , and 4 million parallel sentences, respectively. We randomly select 3K sentences for each of the development and test sets for En–Tr. For all language pairs we keep the 400 most frequent characters as the target-side character set and replace the remainder (infrequent characters) with a specific character.
One of the key modules in our architecture is the morphology table. In order to implement it we use a look-up table whose columns include embeddings for the target language's affixes (each column represents one affix) which are updated during training. As previously mentioned, the table is intended to provide useful, morphological information so it should be initialized properly, for which we use a morphology-aware embedding-learning model. To this end, we use the neural language model of botha2014compositional in which each word is represented via a linear combination of the embeddings of its surface form and subunits, e.g. $\overrightarrow{terbiyesizlik} = \overrightarrow{terbiyesizlik} + \overrightarrow{terbiye} + \overrightarrow{siz} + \overrightarrow{lik}$ . Given a sequence of words, the neural language model tries to predict the next word, so it learns sentence-level dependencies as well as intra-word relations. The model trains surface form and subword-level embeddings which provides us with high-quality affix embeddings.
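The additive composition used by this embedding-learning model can be illustrated in a few lines; the toy dictionary below stands in for the embeddings trained by the neural language model and is not real data, and the function name is ours.

```python
import numpy as np

dim = 4  # 512 in the actual setting; kept tiny here
emb = {k: np.random.rand(dim) for k in ["terbiyesizlik", "terbiye", "siz", "lik"]}

def compose(word, subunits, emb, dim=dim):
    # word vector = surface-form vector + sum of subunit (stem/affix) vectors
    vec = np.array(emb.get(word, np.zeros(dim)), copy=True)
    for unit in subunits:
        vec = vec + emb.get(unit, np.zeros(dim))
    return vec

word_vec = compose("terbiyesizlik", ["terbiye", "siz", "lik"], emb)
```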
Our neural language model is a recurrent network with a single 1000-dimensional GRU layer, which is trained on the target sides of our parallel corpora. The embedding size is 512 and we use a batch size of 100 to train the model. Before training the neural language model, we need to manipulate the training corpus to decompose words into morphemes for which we use Morfessor BIBREF8 , an unsupervised morphological analyzer. Using Morfessor each word is segmented into different subunits where we consider the longest part as the stem of each word; what appears before the stem is taken as a member of the set of prefixes (there might be one or more prefixes) and what follows the stem is considered as a member of the set of suffixes.
Since Morfessor is an unsupervised analyzer, in order to minimize segmentation errors and avoid noisy results we filter its output and exclude subunits which occur fewer than 500 times. After decomposing, filtering, and separating stems from affixes, we extracted several affixes which are reported in Table 2 . We emphasize that there might be wrong segmentations in Morfessor's output, e.g. Turkish is a suffix-based language, so there are no prefixes in this language, but based on what Morfessor generated we extracted 11 different types of prefixes. Apart from this frequency filtering, we do not post-process or manually correct Morfessor's outputs.
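The post-processing described here (longest segment as stem, frequency filtering of affixes) is straightforward to express in code. The sketch below assumes the word segmentations have already been produced by Morfessor and uses function names of our own choosing.

```python
from collections import Counter

def split_segments(segments):
    # Longest segment is taken as the stem; everything before it is a prefix,
    # everything after it a suffix.
    stem_idx = max(range(len(segments)), key=lambda i: len(segments[i]))
    return segments[:stem_idx], segments[stem_idx], segments[stem_idx + 1:]

def collect_affixes(segmented_corpus, min_count=500):
    counts = Counter()
    for segments in segmented_corpus:
        prefixes, _, suffixes = split_segments(segments)
        counts.update(prefixes)
        counts.update(suffixes)
    return {affix for affix, c in counts.items() if c >= min_count}

# e.g. split_segments(["terbiye", "siz", "lik"]) -> ([], "terbiye", ["siz", "lik"])
```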
Using the neural language model we train word, stem, and affix embeddings, and initialize the look-up table (but not other parts) of the decoder using those affixes. The look-up table includes high-quality affixes trained on the target side of the parallel corpus by which we train the translation model. Clearly, such an affix table is an additional knowledge source for the decoder. It preserves information which is very close to what the decoder actually needs. However, there might be some missing pieces of information or some incompatibility between the decoder and the table, so we do not freeze the morphology table during training, but let the decoder update it with respect to its needs in the forward and backward passes.
Experimental Results
Table 3 summarizes our experimental results. We report results for the bpe $\rightarrow $ char setting, which means the source token is a bpe unit and the decoder samples a character at each time step. CDNMT is the baseline model. Table 3 includes scores reported from the original CDNMT model BIBREF0 as well as the scores from our reimplementation. To make our work comparable and show the impact of the new architecture, we tried to replicate CDNMT's results in our experimental setting: we kept everything (parameters, iterations, epochs, etc.) unchanged and evaluated the extended model in the same setting. Table 3 reports BLEU scores BIBREF9 of our NMT models.
Table 3 can be interpreted from different perspectives but the main findings are summarized as follows:
The morphology table yields significant improvements for all languages and settings.
The morphology table boosts the En–Tr engine more than others and we think this is because of the nature of the language. Turkish is an agglutinative language in which morphemes are clearly separable from each other, but in German and Russian morphological transformations rely more on fusional operations rather than agglutination.
It seems that there is a direct relation between the size of the morphology table and the gain provided for the decoder, because Russian and Turkish have bigger tables and benefit from the table more than German which has fewer affixes.
The auxiliary output channel is even more useful than the morphology table for all settings but En–Ru, and we think this is because of the morpheme-per-word ratio in Russian. The number of morphemes attached to a Russian word is usually higher than for German and Turkish words in our corpora, and this makes the prediction harder for the classifier (the more suffixes attached to a word, the harder the classification task).
The combination of the morphology table and the extra output channel provides the best result for all languages.
Figure 3 depicts the impact of the morphology table and the extra output channel for each language.
To further study our models' behaviour and ensure that our extensions do not generate random improvements we visualized some attention weights when generating `terbiyesizlik'. In Figure 4 , the upper figure shows attention weights for all Turkish affixes, where the y axis shows different time steps and the x axis includes attention weights of all affixes (304 columns) for those time steps, e.g. the cell in the first row and the first column represents the attention weight assigned to the first Turkish affix when sampling t in `terbiyesizlik'. While at first glance the figure may appear somewhat confusing, it provides some interesting insights, which we elaborate on next.
In addition to the whole attention matrix we also visualized a subset of weights to show how the morphology table provides useful information. In the second figure we study the behaviour of the morphology table for the first (t $_1$ ), fifth (i $_5$ ), ninth (i $_{9}$ ), and twelfth (i $_{12}$ ) time steps when generating the same Turkish word `t $_1$ erbi $_5$ yesi $_9$ zli $_{12}$ k'. t $_1$ is the first character of the word. We also have three i characters from different morphemes, where the first one is part of the stem, the second one belongs to the suffix `siz', and the third one to `lik'. It is interesting to see how the table reacts to the same character from different parts. For each time step we selected the top-10 affixes which have the highest attention weights. The set of top-10 affixes can be different for each step, so we made a union of those sets which gives us 22 affixes. The bottom part of Figure 4 shows the attention weights for those 22 affixes at each time step.
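For clarity, the selection of this visualized subset can be sketched as follows; `attn` is a placeholder attention matrix, not the weights from the trained model.

```python
import numpy as np

attn = np.random.rand(13, 304)               # (time steps of `terbiyesizlik', Turkish affixes)
steps = [0, 4, 8, 11]                        # t_1, i_5, i_9, i_12 (0-based indices)
top10 = {j for t in steps for j in np.argsort(attn[t])[-10:]}  # union of per-step top-10 affixes
subset = attn[np.ix_(steps, sorted(top10))]  # the weights shown in the bottom plot
```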
After analyzing the weights we observed interesting properties about the morphology table and the auxiliary attention module. The main findings about the behaviour of the table are as follows:
The model assigns high attention weights to stem-C for almost all time steps. However, the weights assigned to this class for t $_1$ and i $_5$ are much higher than those of affix characters (as they are part of the stem). The vertical lines in both figures confirm this feature (bad behaviour).
For some unknown reason, there are some affixes which have no direct relation to that particular time step but still receive high attention, such as maz at t $_{12}$ (bad behaviour).
For almost all time steps the highest attention weight belongs to the class which is expected to be selected, e.g. weights for (i $_5$ ,stem-C) or (i $_{9}$ ,siz-C) (good behaviour).
The morphology table may send bad or good signals but it is consistent for similar or co-occurring characters, e.g. for the last three time steps l $_{11}$ , i $_{12}$ , and k $_{13}$ , almost the same set of affixes receives the highest attention weights. This consistency is exactly what we are looking for, as it can define a reliable external constraint for the decoder to guide it. Vertical lines on the figure also confirm this fact. They show that for a set of consecutive characters which belong to the same morpheme the attention module sends a signal from a particular affix (good behaviour).
There are some affixes which might not be directly related to that time step but receive high attention weights. This is because those affixes either include the same character which the decoder tries to predict (e.g. i-C for i $_{4}$ or t-C and tin-C for t $_{1}$ ), or frequently appear with that part of the word which includes the target character (e.g. mi-C has a high weight when predicting t $_1$ because t $_1$ belongs to terbiye which frequently collocates with mi-C: terbiye+mi) (good behaviour).
Finally, in order to complete our evaluation study we feed the English-to-German NMT model with the sentence `Terms and conditions for sending contributions to the BBC', to show how the model behaves differently and generates a better target sentence. Translations generated by our models are illustrated in Table 4 .
The table demonstrates that our architecture is able to control the decoder and limit its selections, e.g. the word `allgemeinen' generated by the baseline model is redundant. There is no constraint to inform the baseline model that this word should not be generated, whereas our proposed architecture controls the decoder in such situations. After analyzing our model, we realized that there are strong attention weights assigned to the w-space (indicating white space characters) and BOS (beginning of the sequence) columns of the affix table while sampling the first character of the word `Geschäft', which shows that the decoder is informed about the start point of the sequence. Similar to the baseline model's decoder, our decoder can sample any character including `a' of `allgemeinen' or `G' of `Geschäft'. Translation information stored in the baseline decoder is not sufficient for selecting the right character `G', so the decoder wrongly starts with `i' and continues along a wrong path up to generating the whole word. However, our decoder's information is accompanied with signals from the affix table which force it to start with a better initial character, whose sampling leads to generating the correct target word.
Another interesting feature about the table is the new structure `Geschäft s bedingungen' generated by the improved model. As the reference translation shows, in the correct form these two structures should be glued together via `s', which can be considered as an infix. As our model is supposed to detect this sort of intra-word relation, it treats the whole structure as two compounds which are connected to one another via an infix. Although this is not a correct translation and it would be trivial to post-edit into the correct output form, it is interesting to see how our mechanism forces the decoder to pay attention to intra-word relations.
Apart from these two interesting findings, the number of wrong character selections in the baseline model is considerably reduced in the improved model because of our enhanced architecture.
Conclusion and Future Work
In this paper we proposed a new architecture to incorporate morphological information into the NMT pipeline. We extended the state-of-the-art NMT model BIBREF0 with a morphology table. The table could be considered as an external knowledge source which is helpful as it increases the capacity of the model by increasing the number of network parameters. We tried to benefit from this advantage. Moreover, we managed to fill the table with morphological information to further boost the NMT model when translating into MRLs. Apart from the table we also designed an additional output channel which forces the decoder to predict morphological annotations. The error signals coming from the second channel during training inform the decoder with morphological properties of the target language. Experimental results show that our techniques were useful for NMT of MRLs.
As our future work we follow three main ideas. $i$ ) We try to find more efficient ways to supply morphological information for both the encoder and decoder. $ii$ ) We plan to benefit from other types of information such as syntactic and semantic annotations to boost the decoder, as the table is not limited to morphological information alone and can preserve other sorts of information. $iii$ ) Finally, we target sequence generation for fusional languages. Although our model showed significant improvements for both German and Russian, the proposed model is more suitable for generating sequences in agglutinative languages.
Acknowledgments
We thank our anonymous reviewers for their valuable feedback, as well as the Irish centre for high-end computing (www.ichec.ie) for providing computational infrastructures. This work has been supported by the ADAPT Centre for Digital Content Technology which is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund. | target-side affixes |
16fa6896cf4597154363a6c9a98deb49fffef15f | 16fa6896cf4597154363a6c9a98deb49fffef15f_0 | Q: Do they report results only on English data?
Text: Background
Much prior work has been done at the intersection of climate change and Twitter, such as tracking climate change sentiment over time BIBREF2 , finding correlations between Twitter climate change sentiment and seasonal effects BIBREF3 , and clustering Twitter users based on climate mentalities using network analysis BIBREF4 . Throughout, Twitter has been accepted as a powerful tool given the magnitude and reach of samples unattainable from standard surveys. However, the aforementioned studies are not scalable with regards to training data, do not use more recent sentiment analysis tools (such as neural nets), and do not consider unbiased comparisons pre- and post- various climate events (which would allow for a more concrete evaluation of shocks to climate change sentiment). This paper aims to address these three concerns as follows.
First, we show that machine learning models formed using our labeling technique can accurately predict tweet sentiment (see Section SECREF2 ). We introduce a novel method to intuit binary sentiments of large numbers of tweets for training purposes. Second, we quantify unbiased outcomes from these predicted sentiments (see Section SECREF4 ). We do this by comparing sentiments within the same cohort of Twitter users tweeting both before and after specific natural disasters; this removes bias from over-weighting Twitter users who are only compelled to compose tweets after a disaster.
Data
We henceforth refer to a tweet affirming climate change as a “positive" sample (labeled as 1 in the data), and a tweet denying climate change as a “negative" sample (labeled as -1 in the data). All data were downloaded from Twitter in two separate batches using the “twint" scraping tool BIBREF5 to sample historical tweets for several different search terms; queries always included either “climate change" or “global warming", and further included disaster-specific search terms (e.g., “bomb cyclone," “blizzard," “snowstorm," etc.). We refer to the first data batch as “influential" tweets, and the second data batch as “event-related" tweets.
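A hedged sketch of how one such scrape could be configured with twint is given below; the attribute names come from twint's Python API, while the concrete query, date window, and output path are illustrative assumptions rather than the authors' exact settings.

```python
import twint

c = twint.Config()
c.Search = '"climate change" OR "global warming" blizzard'  # topic + disaster-specific term
c.Since = "2017-12-19"   # two weeks before the event window (illustrative dates)
c.Until = "2018-01-20"   # two weeks after the event window
c.Lang = "en"
c.Store_csv = True
c.Output = "bomb_cyclone_tweets.csv"
twint.run.Search(c)
```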
The first data batch consists of tweets relevant to blizzards, hurricanes, and wildfires, under the constraint that they are tweeted by “influential" tweeters, who we define as individuals certain to have a classifiable sentiment regarding the topic at hand. For example, we assume that any tweet composed by Al Gore regarding climate change is a positive sample, whereas any tweet from conspiracy account @ClimateHiJinx is a negative sample. The assumption we make in ensuing methods (confirmed as reasonable in Section SECREF2 ) is that influential tweeters can be used to label tweets in bulk in the absence of manually-labeled tweets. Here, we enforce binary labels for all tweets composed by each of the 133 influential tweeters that we identified on Twitter (87 of whom accept climate change), yielding a total of 16,360 influential tweets.
The second data batch consists of event-related tweets for five natural disasters occurring in the U.S. in 2018. These are: the East Coast Bomb Cyclone (Jan. 2 - 6); the Mendocino, California wildfires (Jul. 27 - Sept. 18); Hurricane Florence (Aug. 31 - Sept. 19); Hurricane Michael (Oct. 7 - 16); and the California Camp Fires (Nov. 8 - 25). For each disaster, we scraped tweets starting from two weeks prior to the beginning of the event, and continuing through two weeks after the end of the event. Summary statistics on the downloaded event-specific tweets are provided in Table TABREF1 . Note that the number of tweets occurring prior to the two 2018 sets of California fires are relatively small. This is because the magnitudes of these wildfires were relatively unpredictable, whereas blizzards and hurricanes are often forecast weeks in advance alongside public warnings. The first (influential tweet data) and second (event-related tweet data) batches are de-duplicated to be mutually exclusive. In Section SECREF2 , we perform geographic analysis on the event-related tweets from which we can scrape self-reported user city from Twitter user profile header cards; overall this includes 840 pre-event and 5,984 post-event tweets.
To create a model for predicting sentiments of event-related tweets, we divide the first data batch of influential tweets into training and validation datasets with a 90%/10% split. The training set contains 49.2% positive samples, and the validation set contains 49.0% positive samples. We form our test set by manually labeling a subset of 500 tweets from the event-related tweets (randomly chosen across all five natural disasters), of which 50.0% are positive samples.
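A minimal sketch of this split is shown below; the two lists are placeholder data standing in for the 16,360 influential tweets and their 1/-1 labels, and the stratification is our assumption rather than a stated detail.

```python
from sklearn.model_selection import train_test_split

texts = ["climate change is real", "global warming is a hoax"] * 50   # placeholder tweets
labels = [1, -1] * 50                                                 # placeholder labels
train_x, val_x, train_y, val_y = train_test_split(
    texts, labels, test_size=0.10, random_state=0, stratify=labels
)
```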
Labeling Methodology
Our first goal is to train a sentiment analysis model (on training and validation datasets) in order to perform classification inference on event-based tweets. We experimented with different feature extraction methods and classification models. Feature extractions examined include Tokenizer, Unigram, Bigram, 5-char-gram, and tf-idf methods. Models include both neural nets (e.g. RNNs, CNNs) and standard machine learning tools (e.g. Naive Bayes with Laplace Smoothing, k-clustering, SVM with linear kernel). Model accuracies are reported in Table FIGREF3 .
The RNN pre-trained using GloVe word embeddings BIBREF6 achieved the highest test accuracy. We pass tokenized features into the embedding layer, followed by an LSTM BIBREF7 with dropout and ReLU activation, and a dense layer with sigmoid activation. We apply an Adam optimizer on the binary cross-entropy loss. Implementing this simple, one-layer LSTM allows us to surpass the other traditional machine learning classification methods. Note the 13-point spread between validation and test accuracies achieved. Ideally, the training, validation, and test datasets have the same underlying distribution of tweet sentiments; the assumption made with our labeling technique is that the influential accounts chosen are representative of all Twitter accounts. Critically, when choosing the influential Twitter users who believe in climate change, we highlighted primarily politicians or news sources (i.e., verifiably affirming or denying climate change); these tweets rarely make spelling errors or use sarcasm. Due to this skew, the model yields a high rate of false negatives. It is likely that we could lessen the gap between validation and test accuracies by finding more “real" Twitter users who are climate change believers, e.g. by using the methodology found in BIBREF4 .
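A hedged Keras sketch of a model along these lines is shown below; the vocabulary size, sequence length, unit counts, and the placement of the ReLU layer are our assumptions, and in the actual setup the embedding layer would be initialized with the pre-trained GloVe vectors rather than learned from scratch.

```python
from tensorflow.keras import layers, models

vocab_size, seq_len, embed_dim = 20000, 50, 100   # assumed sizes
model = models.Sequential([
    layers.Input(shape=(seq_len,), dtype="int32"),
    layers.Embedding(vocab_size, embed_dim),      # GloVe-initialized in the real setup
    layers.LSTM(128, dropout=0.2),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```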
Outcome Analysis
Our second goal is to compare the mean values of users' binary sentiments both pre- and post- each natural disaster event. Applying our highest-performing RNN to event-related tweets yields the following breakdown of positive tweets: Bomb Cyclone (34.7%), Mendocino Wildfire (80.4%), Hurricane Florence (57.2%), Hurricane Michael (57.6%), and Camp Fire (70.1%). As sanity checks, we examine the predicted sentiments on a subset with geographic user information and compare results to the prior literature.
In Figure FIGREF3 , we map 4-clustering results on three dimensions: predicted sentiments, latitude, and longitude. The clusters correspond to four major regions of the U.S.: the Northeast (green), Southeast (yellow), Midwest (blue), and West Coast (purple); centroids are designated by crosses. Average sentiments within each cluster confirm prior knowledge BIBREF1 : the Southeast and Midwest have lower average sentiments ( INLINEFORM0 and INLINEFORM1 , respectively) than the West Coast and Northeast (0.22 and 0.09, respectively). In Figure FIGREF5 , we plot predicted sentiment averaged by U.S. city of event-related tweeters. The majority of positive tweets emanate from traditionally liberal hubs (e.g. San Francisco, Los Angeles, Austin), while most negative tweets come from the Philadelphia metropolitan area. These regions aside, rural areas tended to see more negative sentiment tweeters post-event, whereas urban regions saw more positive sentiment tweeters; however, overall average climate change sentiment pre- and post-event was relatively stable geographically. This map further confirms findings that coastal cities tend to be more aware of climate change BIBREF8 .
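The 4-means clustering over (predicted sentiment, latitude, longitude) can be reproduced in a few lines; the array below is placeholder data rather than the geo-tagged tweets, and the feature standardization is our assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

X = np.array([[1, 40.7, -74.0], [-1, 33.7, -84.4],
              [1, 37.8, -122.4], [-1, 41.9, -87.6]] * 10, dtype=float)  # (sentiment, lat, lon)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(StandardScaler().fit_transform(X))
cluster_mean_sentiment = [X[kmeans.labels_ == k, 0].mean() for k in range(4)]
```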
From these mapping exercises, we claim that our “influential tweet" labeling is reasonable. We now discuss our final method on outcomes: comparing average Twitter sentiment pre-event to post-event. In Figure FIGREF8 , we display these metrics in two ways: first, as an overall average of tweet binary sentiment, and second, as a within-cohort average of tweet sentiment for the subset of tweets by users who tweeted both before and after the event (hence minimizing awareness bias). We use Student's t-test to calculate the significance of mean sentiment differences pre- and post-event (see Section SECREF4 ). Note that we perform these mean comparisons on all event-related data, since the low number of geo-tagged samples would produce an underpowered study.
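The significance test can be sketched as follows; the two lists are placeholder binary sentiments, and whether the within-cohort comparison was run as a paired test is our reading rather than a stated detail.

```python
from scipy import stats

pre = [1, -1, -1, 1, -1, -1, 1, -1]    # placeholder pre-event sentiments
post = [1, 1, -1, 1, 1, -1, 1, 1]      # placeholder post-event sentiments
t_stat, p_value = stats.ttest_ind(pre, post)   # overall (unpaired) comparison
# A within-cohort analogue over the same users could use a paired test:
# stats.ttest_rel(pre_user_means, post_user_means)
```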
Results & Discussion
In Figure FIGREF8 , we see that overall sentiment averages rarely show movement post-event: that is, only Hurricane Florence shows a significant difference in average tweet sentiment pre- and post-event at the 1% level, corresponding to a 0.12 point decrease in positive climate change sentiment. However, controlling for the same group of users tells a different story: both Hurricane Florence and Hurricane Michael have significant tweet sentiment average differences pre- and post-event at the 1% level. Within-cohort, Hurricane Florence sees an increase in positive climate change sentiment by 0.21 points, which is contrary to the overall average change (the latter being likely biased since an influx of climate change deniers are likely to tweet about hurricanes only after the event). Hurricane Michael sees an increase in average tweet sentiment of 0.11 points, which reverses the direction of tweets from mostly negative pre-event to mostly positive post-event. Likely due to similar bias reasons, the Mendocino wildfires in California see a 0.06 point decrease in overall sentiment post-event, but a 0.09 point increase in within-cohort sentiment. Methodologically, we assert that overall averages are not robust results to use in sentiment analyses.
We now comment on the two events yielding similar results between overall and within-cohort comparisons. Most tweets regarding the Bomb Cyclone have negative sentiment, though sentiment increases by 0.02 and 0.04 points post-event for overall and within-cohort averages, respectively. Meanwhile, the California Camp Fires yield a 0.11 and 0.27 point sentiment decline in overall and within-cohort averages, respectively. This large difference in sentiment change can be attributed to two factors: first, the number of tweets made regarding wildfires prior to the (usually unexpected) event is quite low, so within-cohort users tend to have more polarized climate change beliefs. Second, the root cause of the Camp Fires was quickly linked to PG&E, bolstering claims that climate change had nothing to do with the rapid spread of fire; hence within-cohort users were less vocally positive regarding climate change post-event.
There are several caveats in our work: first, tweet sentiment is rarely binary (this work could be extended to a multinomial or continuous model). Second, our results are constrained to Twitter users, who are known to be more negative than the general U.S. population BIBREF9 . Third, we do not take into account the aggregate effects of continued natural disasters over time. Going forward, there is clear demand in discovering whether social networks can indicate environmental metrics in a “nowcasting" fashion. As climate change becomes more extreme, it remains to be seen what degree of predictive power exists in our current model regarding climate change sentiments with regards to natural disasters. | Yes |
0f60864503ecfd5b048258e21d548ab5e5e81772 | 0f60864503ecfd5b048258e21d548ab5e5e81772_0 | Q: Do the authors mention any confounds to their study?
Text: Background
Much prior work has been done at the intersection of climate change and Twitter, such as tracking climate change sentiment over time BIBREF2 , finding correlations between Twitter climate change sentiment and seasonal effects BIBREF3 , and clustering Twitter users based on climate mentalities using network analysis BIBREF4 . Throughout, Twitter has been accepted as a powerful tool given the magnitude and reach of samples unattainable from standard surveys. However, the aforementioned studies are not scalable with regards to training data, do not use more recent sentiment analysis tools (such as neural nets), and do not consider unbiased comparisons pre- and post- various climate events (which would allow for a more concrete evaluation of shocks to climate change sentiment). This paper aims to address these three concerns as follows.
First, we show that machine learning models formed using our labeling technique can accurately predict tweet sentiment (see Section SECREF2 ). We introduce a novel method to intuit binary sentiments of large numbers of tweets for training purposes. Second, we quantify unbiased outcomes from these predicted sentiments (see Section SECREF4 ). We do this by comparing sentiments within the same cohort of Twitter users tweeting both before and after specific natural disasters; this removes bias from over-weighting Twitter users who are only compelled to compose tweets after a disaster.
Data
We henceforth refer to a tweet affirming climate change as a “positive" sample (labeled as 1 in the data), and a tweet denying climate change as a “negative" sample (labeled as -1 in the data). All data were downloaded from Twitter in two separate batches using the “twint" scraping tool BIBREF5 to sample historical tweets for several different search terms; queries always included either “climate change" or “global warming", and further included disaster-specific search terms (e.g., “bomb cyclone," “blizzard," “snowstorm," etc.). We refer to the first data batch as “influential" tweets, and the second data batch as “event-related" tweets.
The first data batch consists of tweets relevant to blizzards, hurricanes, and wildfires, under the constraint that they are tweeted by “influential" tweeters, who we define as individuals certain to have a classifiable sentiment regarding the topic at hand. For example, we assume that any tweet composed by Al Gore regarding climate change is a positive sample, whereas any tweet from conspiracy account @ClimateHiJinx is a negative sample. The assumption we make in ensuing methods (confirmed as reasonable in Section SECREF2 ) is that influential tweeters can be used to label tweets in bulk in the absence of manually-labeled tweets. Here, we enforce binary labels for all tweets composed by each of the 133 influential tweeters that we identified on Twitter (87 of whom accept climate change), yielding a total of 16,360 influential tweets.
The second data batch consists of event-related tweets for five natural disasters occurring in the U.S. in 2018. These are: the East Coast Bomb Cyclone (Jan. 2 - 6); the Mendocino, California wildfires (Jul. 27 - Sept. 18); Hurricane Florence (Aug. 31 - Sept. 19); Hurricane Michael (Oct. 7 - 16); and the California Camp Fires (Nov. 8 - 25). For each disaster, we scraped tweets starting from two weeks prior to the beginning of the event, and continuing through two weeks after the end of the event. Summary statistics on the downloaded event-specific tweets are provided in Table TABREF1 . Note that the number of tweets occurring prior to the two 2018 sets of California fires are relatively small. This is because the magnitudes of these wildfires were relatively unpredictable, whereas blizzards and hurricanes are often forecast weeks in advance alongside public warnings. The first (influential tweet data) and second (event-related tweet data) batches are de-duplicated to be mutually exclusive. In Section SECREF2 , we perform geographic analysis on the event-related tweets from which we can scrape self-reported user city from Twitter user profile header cards; overall this includes 840 pre-event and 5,984 post-event tweets.
To create a model for predicting sentiments of event-related tweets, we divide the first data batch of influential tweets into training and validation datasets with a 90%/10% split. The training set contains 49.2% positive samples, and the validation set contains 49.0% positive samples. We form our test set by manually labeling a subset of 500 tweets from the event-related tweets (randomly chosen across all five natural disasters), of which 50.0% are positive samples.
Labeling Methodology
Our first goal is to train a sentiment analysis model (on training and validation datasets) in order to perform classification inference on event-based tweets. We experimented with different feature extraction methods and classification models. Feature extractions examined include Tokenizer, Unigram, Bigram, 5-char-gram, and tf-idf methods. Models include both neural nets (e.g. RNNs, CNNs) and standard machine learning tools (e.g. Naive Bayes with Laplace Smoothing, k-clustering, SVM with linear kernel). Model accuracies are reported in Table FIGREF3 .
The RNN pre-trained using GloVe word embeddings BIBREF6 achieved the highest test accuracy. We pass tokenized features into the embedding layer, followed by an LSTM BIBREF7 with dropout and ReLU activation, and a dense layer with sigmoid activation. We apply an Adam optimizer on the binary cross-entropy loss. Implementing this simple, one-layer LSTM allows us to surpass the other traditional machine learning classification methods. Note the 13-point spread between validation and test accuracies achieved. Ideally, the training, validation, and test datasets have the same underlying distribution of tweet sentiments; the assumption made with our labeling technique is that the influential accounts chosen are representative of all Twitter accounts. Critically, when choosing the influential Twitter users who believe in climate change, we highlighted primarily politicians or news sources (i.e., verifiably affirming or denying climate change); these tweets rarely make spelling errors or use sarcasm. Due to this skew, the model yields a high rate of false negatives. It is likely that we could lessen the gap between validation and test accuracies by finding more “real" Twitter users who are climate change believers, e.g. by using the methodology found in BIBREF4 .
Outcome Analysis
Our second goal is to compare the mean values of users' binary sentiments both pre- and post- each natural disaster event. Applying our highest-performing RNN to event-related tweets yields the following breakdown of positive tweets: Bomb Cyclone (34.7%), Mendocino Wildfire (80.4%), Hurricane Florence (57.2%), Hurricane Michael (57.6%), and Camp Fire (70.1%). As sanity checks, we examine the predicted sentiments on a subset with geographic user information and compare results to the prior literature.
In Figure FIGREF3 , we map 4-clustering results on three dimensions: predicted sentiments, latitude, and longitude. The clusters correspond to four major regions of the U.S.: the Northeast (green), Southeast (yellow), Midwest (blue), and West Coast (purple); centroids are designated by crosses. Average sentiments within each cluster confirm prior knowledge BIBREF1 : the Southeast and Midwest have lower average sentiments ( INLINEFORM0 and INLINEFORM1 , respectively) than the West Coast and Northeast (0.22 and 0.09, respectively). In Figure FIGREF5 , we plot predicted sentiment averaged by U.S. city of event-related tweeters. The majority of positive tweets emanate from traditionally liberal hubs (e.g. San Francisco, Los Angeles, Austin), while most negative tweets come from the Philadelphia metropolitan area. These regions aside, rural areas tended to see more negative sentiment tweeters post-event, whereas urban regions saw more positive sentiment tweeters; however, overall average climate change sentiment pre- and post-event was relatively stable geographically. This map further confirms findings that coastal cities tend to be more aware of climate change BIBREF8 .
From these mapping exercises, we claim that our “influential tweet" labeling is reasonable. We now discuss our final method on outcomes: comparing average Twitter sentiment pre-event to post-event. In Figure FIGREF8 , we display these metrics in two ways: first, as an overall average of tweet binary sentiment, and second, as a within-cohort average of tweet sentiment for the subset of tweets by users who tweeted both before and after the event (hence minimizing awareness bias). We use Student's t-test to calculate the significance of mean sentiment differences pre- and post-event (see Section SECREF4 ). Note that we perform these mean comparisons on all event-related data, since the low number of geo-tagged samples would produce an underpowered study.
Results & Discussion
In Figure FIGREF8 , we see that overall sentiment averages rarely show movement post-event: that is, only Hurricane Florence shows a significant difference in average tweet sentiment pre- and post-event at the 1% level, corresponding to a 0.12 point decrease in positive climate change sentiment. However, controlling for the same group of users tells a different story: both Hurricane Florence and Hurricane Michael have significant tweet sentiment average differences pre- and post-event at the 1% level. Within-cohort, Hurricane Florence sees an increase in positive climate change sentiment by 0.21 points, which is contrary to the overall average change (the latter being likely biased since an influx of climate change deniers are likely to tweet about hurricanes only after the event). Hurricane Michael sees an increase in average tweet sentiment of 0.11 points, which reverses the direction of tweets from mostly negative pre-event to mostly positive post-event. Likely due to similar bias reasons, the Mendocino wildfires in California see a 0.06 point decrease in overall sentiment post-event, but a 0.09 point increase in within-cohort sentiment. Methodologically, we assert that overall averages are not robust results to use in sentiment analyses.
We now comment on the two events yielding similar results between overall and within-cohort comparisons. Most tweets regarding the Bomb Cyclone have negative sentiment, though sentiment increases by 0.02 and 0.04 points post-event for overall and within-cohort averages, respectively. Meanwhile, the California Camp Fires yield a 0.11 and 0.27 point sentiment decline in overall and within-cohort averages, respectively. This large difference in sentiment change can be attributed to two factors: first, the number of tweets made regarding wildfires prior to the (usually unexpected) event is quite low, so within-cohort users tend to have more polarized climate change beliefs. Second, the root cause of the Camp Fires was quickly linked to PG&E, bolstering claims that climate change had nothing to do with the rapid spread of fire; hence within-cohort users were less vocally positive regarding climate change post-event.
There are several caveats in our work: first, tweet sentiment is rarely binary (this work could be extended to a multinomial or continuous model). Second, our results are constrained to Twitter users, who are known to be more negative than the general U.S. population BIBREF9 . Third, we do not take into account the aggregate effects of continued natural disasters over time. Going forward, there is clear demand in discovering whether social networks can indicate environmental metrics in a “nowcasting" fashion. As climate change becomes more extreme, it remains to be seen what degree of predictive power exists in our current model regarding climate change sentiments with regards to natural disasters. | No |
fe578842021ccfc295209a28cf2275ca18f8d155 | fe578842021ccfc295209a28cf2275ca18f8d155_0 | Q: Which machine learning models are used?
Text: Background
Much prior work has been done at the intersection of climate change and Twitter, such as tracking climate change sentiment over time BIBREF2 , finding correlations between Twitter climate change sentiment and seasonal effects BIBREF3 , and clustering Twitter users based on climate mentalities using network analysis BIBREF4 . Throughout, Twitter has been accepted as a powerful tool given the magnitude and reach of samples unattainable from standard surveys. However, the aforementioned studies are not scalable with regards to training data, do not use more recent sentiment analysis tools (such as neural nets), and do not consider unbiased comparisons pre- and post- various climate events (which would allow for a more concrete evaluation of shocks to climate change sentiment). This paper aims to address these three concerns as follows.
First, we show that machine learning models formed using our labeling technique can accurately predict tweet sentiment (see Section SECREF2 ). We introduce a novel method to intuit binary sentiments of large numbers of tweets for training purposes. Second, we quantify unbiased outcomes from these predicted sentiments (see Section SECREF4 ). We do this by comparing sentiments within the same cohort of Twitter users tweeting both before and after specific natural disasters; this removes bias from over-weighting Twitter users who are only compelled to compose tweets after a disaster.
Data
We henceforth refer to a tweet affirming climate change as a “positive" sample (labeled as 1 in the data), and a tweet denying climate change as a “negative" sample (labeled as -1 in the data). All data were downloaded from Twitter in two separate batches using the “twint" scraping tool BIBREF5 to sample historical tweets for several different search terms; queries always included either “climate change" or “global warming", and further included disaster-specific search terms (e.g., “bomb cyclone," “blizzard," “snowstorm," etc.). We refer to the first data batch as “influential" tweets, and the second data batch as “event-related" tweets.
The first data batch consists of tweets relevant to blizzards, hurricanes, and wildfires, under the constraint that they are tweeted by “influential" tweeters, who we define as individuals certain to have a classifiable sentiment regarding the topic at hand. For example, we assume that any tweet composed by Al Gore regarding climate change is a positive sample, whereas any tweet from conspiracy account @ClimateHiJinx is a negative sample. The assumption we make in ensuing methods (confirmed as reasonable in Section SECREF2 ) is that influential tweeters can be used to label tweets in bulk in the absence of manually-labeled tweets. Here, we enforce binary labels for all tweets composed by each of the 133 influential tweeters that we identified on Twitter (87 of whom accept climate change), yielding a total of 16,360 influential tweets.
The second data batch consists of event-related tweets for five natural disasters occurring in the U.S. in 2018. These are: the East Coast Bomb Cyclone (Jan. 2 - 6); the Mendocino, California wildfires (Jul. 27 - Sept. 18); Hurricane Florence (Aug. 31 - Sept. 19); Hurricane Michael (Oct. 7 - 16); and the California Camp Fires (Nov. 8 - 25). For each disaster, we scraped tweets starting from two weeks prior to the beginning of the event, and continuing through two weeks after the end of the event. Summary statistics on the downloaded event-specific tweets are provided in Table TABREF1 . Note that the number of tweets occurring prior to the two 2018 sets of California fires are relatively small. This is because the magnitudes of these wildfires were relatively unpredictable, whereas blizzards and hurricanes are often forecast weeks in advance alongside public warnings. The first (influential tweet data) and second (event-related tweet data) batches are de-duplicated to be mutually exclusive. In Section SECREF2 , we perform geographic analysis on the event-related tweets from which we can scrape self-reported user city from Twitter user profile header cards; overall this includes 840 pre-event and 5,984 post-event tweets.
To create a model for predicting sentiments of event-related tweets, we divide the first data batch of influential tweets into training and validation datasets with a 90%/10% split. The training set contains 49.2% positive samples, and the validation set contains 49.0% positive samples. We form our test set by manually labeling a subset of 500 tweets from the event-related tweets (randomly chosen across all five natural disasters), of which 50.0% are positive samples.
Labeling Methodology
Our first goal is to train a sentiment analysis model (on training and validation datasets) in order to perform classification inference on event-based tweets. We experimented with different feature extraction methods and classification models. Feature extractions examined include Tokenizer, Unigram, Bigram, 5-char-gram, and tf-idf methods. Models include both neural nets (e.g. RNNs, CNNs) and standard machine learning tools (e.g. Naive Bayes with Laplace Smoothing, k-clustering, SVM with linear kernel). Model accuracies are reported in Table FIGREF3 .
The RNN pre-trained using GloVe word embeddings BIBREF6 achieved the highest test accuracy. We pass tokenized features into the embedding layer, followed by an LSTM BIBREF7 with dropout and ReLU activation, and a dense layer with sigmoid activation. We apply an Adam optimizer on the binary cross-entropy loss. Implementing this simple, one-layer LSTM allows us to surpass the other traditional machine learning classification methods. Note the 13-point spread between validation and test accuracies achieved. Ideally, the training, validation, and test datasets have the same underlying distribution of tweet sentiments; the assumption made with our labeling technique is that the influential accounts chosen are representative of all Twitter accounts. Critically, when choosing the influential Twitter users who believe in climate change, we highlighted primarily politicians or news sources (i.e., verifiably affirming or denying climate change); these tweets rarely make spelling errors or use sarcasm. Due to this skew, the model yields a high rate of false negatives. It is likely that we could lessen the gap between validation and test accuracies by finding more “real" Twitter users who are climate change believers, e.g. by using the methodology found in BIBREF4 .
Outcome Analysis
Our second goal is to compare the mean values of users' binary sentiments both pre- and post- each natural disaster event. Applying our highest-performing RNN to event-related tweets yields the following breakdown of positive tweets: Bomb Cyclone (34.7%), Mendocino Wildfire (80.4%), Hurricane Florence (57.2%), Hurricane Michael (57.6%), and Camp Fire (70.1%). As sanity checks, we examine the predicted sentiments on a subset with geographic user information and compare results to the prior literature.
In Figure FIGREF3 , we map 4-clustering results on three dimensions: predicted sentiments, latitude, and longitude. The clusters correspond to four major regions of the U.S.: the Northeast (green), Southeast (yellow), Midwest (blue), and West Coast (purple); centroids are designated by crosses. Average sentiments within each cluster confirm prior knowledge BIBREF1 : the Southeast and Midwest have lower average sentiments ( INLINEFORM0 and INLINEFORM1 , respectively) than the West Coast and Northeast (0.22 and 0.09, respectively). In Figure FIGREF5 , we plot predicted sentiment averaged by U.S. city of event-related tweeters. The majority of positive tweets emanate from traditionally liberal hubs (e.g. San Francisco, Los Angeles, Austin), while most negative tweets come from the Philadelphia metropolitan area. These regions aside, rural areas tended to see more negative sentiment tweeters post-event, whereas urban regions saw more positive sentiment tweeters; however, overall average climate change sentiment pre- and post-event was relatively stable geographically. This map further confirms findings that coastal cities tend to be more aware of climate change BIBREF8 .
From these mapping exercises, we claim that our “influential tweet" labeling is reasonable. We now discuss our final method on outcomes: comparing average Twitter sentiment pre-event to post-event. In Figure FIGREF8 , we display these metrics in two ways: first, as an overall average of tweet binary sentiment, and second, as a within-cohort average of tweet sentiment for the subset of tweets by users who tweeted both before and after the event (hence minimizing awareness bias). We use Student's t-test to calculate the significance of mean sentiment differences pre- and post-event (see Section SECREF4 ). Note that we perform these mean comparisons on all event-related data, since the low number of geo-tagged samples would produce an underpowered study.
Results & Discussion
In Figure FIGREF8 , we see that overall sentiment averages rarely show movement post-event: that is, only Hurricane Florence shows a significant difference in average tweet sentiment pre- and post-event at the 1% level, corresponding to a 0.12 point decrease in positive climate change sentiment. However, controlling for the same group of users tells a different story: both Hurricane Florence and Hurricane Michael have significant tweet sentiment average differences pre- and post-event at the 1% level. Within-cohort, Hurricane Florence sees an increase in positive climate change sentiment by 0.21 points, which is contrary to the overall average change (the latter being likely biased since an influx of climate change deniers are likely to tweet about hurricanes only after the event). Hurricane Michael sees an increase in average tweet sentiment of 0.11 points, which reverses the direction of tweets from mostly negative pre-event to mostly positive post-event. Likely due to similar bias reasons, the Mendocino wildfires in California see a 0.06 point decrease in overall sentiment post-event, but a 0.09 point increase in within-cohort sentiment. Methodologically, we assert that overall averages are not robust results to use in sentiment analyses.
We now comment on the two events yielding similar results between overall and within-cohort comparisons. Most tweets regarding the Bomb Cyclone have negative sentiment, though sentiment increases by 0.02 and 0.04 points post-event for overall and within-cohort averages, respectively. Meanwhile, the California Camp Fires yield a 0.11 and 0.27 point sentiment decline in overall and within-cohort averages, respectively. This large difference in sentiment change can be attributed to two factors: first, the number of tweets made regarding wildfires prior to the (usually unexpected) event is quite low, so within-cohort users tend to have more polarized climate change beliefs. Second, the root cause of the Camp Fires was quickly linked to PG&E, bolstering claims that climate change had nothing to do with the rapid spread of fire; hence within-cohort users were less vocally positive regarding climate change post-event.
There are several caveats in our work: first, tweet sentiment is rarely binary (this work could be extended to a multinomial or continuous model). Second, our results are constrained to Twitter users, who are known to be more negative than the general U.S. population BIBREF9 . Third, we do not take into account the aggregate effects of continued natural disasters over time. Going forward, there is clear demand in discovering whether social networks can indicate environmental metrics in a “nowcasting" fashion. As climate change becomes more extreme, it remains to be seen what degree of predictive power exists in our current model regarding climate change sentiments with regards to natural disasters. | RNNs, CNNs, Naive Bayes with Laplace Smoothing, k-clustering, SVM with linear kernel |
00ef9cc1d1d60f875969094bb246be529373cb1d | 00ef9cc1d1d60f875969094bb246be529373cb1d_0 | Q: What methodology is used to compensate for limited labelled data?
Text: Background
Much prior work has been done at the intersection of climate change and Twitter, such as tracking climate change sentiment over time BIBREF2 , finding correlations between Twitter climate change sentiment and seasonal effects BIBREF3 , and clustering Twitter users based on climate mentalities using network analysis BIBREF4 . Throughout, Twitter has been accepted as a powerful tool given the magnitude and reach of samples unattainable from standard surveys. However, the aforementioned studies are not scalable with regards to training data, do not use more recent sentiment analysis tools (such as neural nets), and do not consider unbiased comparisons pre- and post- various climate events (which would allow for a more concrete evaluation of shocks to climate change sentiment). This paper aims to address these three concerns as follows.
First, we show that machine learning models formed using our labeling technique can accurately predict tweet sentiment (see Section SECREF2 ). We introduce a novel method to intuit binary sentiments of large numbers of tweets for training purposes. Second, we quantify unbiased outcomes from these predicted sentiments (see Section SECREF4 ). We do this by comparing sentiments within the same cohort of Twitter users tweeting both before and after specific natural disasters; this removes bias from over-weighting Twitter users who are only compelled to compose tweets after a disaster.
Data
We henceforth refer to a tweet affirming climate change as a “positive" sample (labeled as 1 in the data), and a tweet denying climate change as a “negative" sample (labeled as -1 in the data). All data were downloaded from Twitter in two separate batches using the “twint" scraping tool BIBREF5 to sample historical tweets for several different search terms; queries always included either “climate change" or “global warming", and further included disaster-specific search terms (e.g., “bomb cyclone," “blizzard," “snowstorm," etc.). We refer to the first data batch as “influential" tweets, and the second data batch as “event-related" tweets.
The first data batch consists of tweets relevant to blizzards, hurricanes, and wildfires, under the constraint that they are tweeted by “influential" tweeters, who we define as individuals certain to have a classifiable sentiment regarding the topic at hand. For example, we assume that any tweet composed by Al Gore regarding climate change is a positive sample, whereas any tweet from conspiracy account @ClimateHiJinx is a negative sample. The assumption we make in ensuing methods (confirmed as reasonable in Section SECREF2 ) is that influential tweeters can be used to label tweets in bulk in the absence of manually-labeled tweets. Here, we enforce binary labels for all tweets composed by each of the 133 influential tweeters that we identified on Twitter (87 of whom accept climate change), yielding a total of 16,360 influential tweets.
The second data batch consists of event-related tweets for five natural disasters occurring in the U.S. in 2018. These are: the East Coast Bomb Cyclone (Jan. 2 - 6); the Mendocino, California wildfires (Jul. 27 - Sept. 18); Hurricane Florence (Aug. 31 - Sept. 19); Hurricane Michael (Oct. 7 - 16); and the California Camp Fires (Nov. 8 - 25). For each disaster, we scraped tweets starting from two weeks prior to the beginning of the event, and continuing through two weeks after the end of the event. Summary statistics on the downloaded event-specific tweets are provided in Table TABREF1 . Note that the number of tweets occurring prior to the two 2018 sets of California fires are relatively small. This is because the magnitudes of these wildfires were relatively unpredictable, whereas blizzards and hurricanes are often forecast weeks in advance alongside public warnings. The first (influential tweet data) and second (event-related tweet data) batches are de-duplicated to be mutually exclusive. In Section SECREF2 , we perform geographic analysis on the event-related tweets from which we can scrape self-reported user city from Twitter user profile header cards; overall this includes 840 pre-event and 5,984 post-event tweets.
To create a model for predicting sentiments of event-related tweets, we divide the first data batch of influential tweets into training and validation datasets with a 90%/10% split. The training set contains 49.2% positive samples, and the validation set contains 49.0% positive samples. We form our test set by manually labeling a subset of 500 tweets from the event-related tweets (randomly chosen across all five natural disasters), of which 50.0% are positive samples.
Labeling Methodology
Our first goal is to train a sentiment analysis model (on training and validation datasets) in order to perform classification inference on event-based tweets. We experimented with different feature extraction methods and classification models. Feature extractions examined include Tokenizer, Unigram, Bigram, 5-char-gram, and tf-idf methods. Models include both neural nets (e.g. RNNs, CNNs) and standard machine learning tools (e.g. Naive Bayes with Laplace Smoothing, k-clustering, SVM with linear kernel). Model accuracies are reported in Table FIGREF3 .
The RNN pre-trained using GloVe word embeddings BIBREF6 achieved the highest test accuracy. We pass tokenized features into the embedding layer, followed by an LSTM BIBREF7 with dropout and ReLU activation, and a dense layer with sigmoid activation. We apply an Adam optimizer on the binary cross-entropy loss. Implementing this simple, one-layer LSTM allows us to surpass the other traditional machine learning classification methods. Note the 13-point spread between validation and test accuracies achieved. Ideally, the training, validation, and test datasets have the same underlying distribution of tweet sentiments; the assumption made with our labeling technique is that the influential accounts chosen are representative of all Twitter accounts. Critically, when choosing the influential Twitter users who believe in climate change, we highlighted primarily politicians or news sources (i.e., verifiably affirming or denying climate change); these tweets rarely make spelling errors or use sarcasm. Due to this skew, the model yields a high rate of false negatives. It is likely that we could lessen the gap between validation and test accuracies by finding more “real" Twitter users who are climate change believers, e.g. by using the methodology found in BIBREF4 .
Outcome Analysis
Our second goal is to compare the mean values of users' binary sentiments both pre- and post- each natural disaster event. Applying our highest-performing RNN to event-related tweets yields the following breakdown of positive tweets: Bomb Cyclone (34.7%), Mendocino Wildfire (80.4%), Hurricane Florence (57.2%), Hurricane Michael (57.6%), and Camp Fire (70.1%). As sanity checks, we examine the predicted sentiments on a subset with geographic user information and compare results to the prior literature.
In Figure FIGREF3 , we map 4-clustering results on three dimensions: predicted sentiments, latitude, and longitude. The clusters correspond to four major regions of the U.S.: the Northeast (green), Southeast (yellow), Midwest (blue), and West Coast (purple); centroids are designated by crosses. Average sentiments within each cluster confirm prior knowledge BIBREF1 : the Southeast and Midwest have lower average sentiments ( INLINEFORM0 and INLINEFORM1 , respectively) than the West Coast and Northeast (0.22 and 0.09, respectively). In Figure FIGREF5 , we plot predicted sentiment averaged by U.S. city of event-related tweeters. The majority of positive tweets emanate from traditionally liberal hubs (e.g. San Francisco, Los Angeles, Austin), while most negative tweets come from the Philadelphia metropolitan area. These regions aside, rural areas tended to see more negative sentiment tweeters post-event, whereas urban regions saw more positive sentiment tweeters; however, overall average climate change sentiment pre- and post-event was relatively stable geographically. This map further confirms findings that coastal cities tend to be more aware of climate change BIBREF8 .
From these mapping exercises, we claim that our “influential tweet" labeling is reasonable. We now discuss our final method on outcomes: comparing average Twitter sentiment pre-event to post-event. In Figure FIGREF8 , we display these metrics in two ways: first, as an overall average of tweet binary sentiment, and second, as a within-cohort average of tweet sentiment for the subset of tweets by users who tweeted both before and after the event (hence minimizing awareness bias). We use Student's t-test to calculate the significance of mean sentiment differences pre- and post-event (see Section SECREF4 ). Note that we perform these mean comparisons on all event-related data, since the low number of geo-tagged samples would produce an underpowered study.
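A minimal sketch of this comparison is shown below, assuming a DataFrame with user, event, period ("pre"/"post") and binary sentiment columns; the column names and the choice of a paired test for the within-cohort case are our assumptions for illustration.

```python
import pandas as pd
from scipy import stats

def compare_event(df: pd.DataFrame, event: str):
    d = df[df.event == event]
    pre = d[d.period == "pre"].sentiment
    post = d[d.period == "post"].sentiment

    # Overall averages: two-sample Student's t-test on all pre vs. post tweets.
    t_all, p_all = stats.ttest_ind(pre, post)

    # Within-cohort: users who tweeted both before and after the event,
    # compared on their per-user mean sentiment.
    both = set(d[d.period == "pre"].user) & set(d[d.period == "post"].user)
    per_user = (d[d.user.isin(both)]
                .pivot_table(index="user", columns="period",
                             values="sentiment", aggfunc="mean"))
    t_coh, p_coh = stats.ttest_rel(per_user["pre"], per_user["post"])
    return (t_all, p_all), (t_coh, p_coh)
```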
Results & Discussion
In Figure FIGREF8 , we see that overall sentiment averages rarely show movement post-event: that is, only Hurricane Florence shows a significant difference in average tweet sentiment pre- and post-event at the 1% level, corresponding to a 0.12 point decrease in positive climate change sentiment. However, controlling for the same group of users tells a different story: both Hurricane Florence and Hurricane Michael have significant tweet sentiment average differences pre- and post-event at the 1% level. Within-cohort, Hurricane Florence sees an increase in positive climate change sentiment by 0.21 points, which is contrary to the overall average change (the latter being likely biased since an influx of climate change deniers are likely to tweet about hurricanes only after the event). Hurricane Michael sees an increase in average tweet sentiment of 0.11 points, which reverses the direction of tweets from mostly negative pre-event to mostly positive post-event. Likely due to similar bias reasons, the Mendocino wildfires in California see a 0.06 point decrease in overall sentiment post-event, but a 0.09 point increase in within-cohort sentiment. Methodologically, we assert that overall averages are not robust results to use in sentiment analyses.
We now comment on the two events yielding similar results between overall and within-cohort comparisons. Most tweets regarding the Bomb Cyclone have negative sentiment, though sentiment increases by 0.02 and 0.04 points post-event for overall and within-cohort averages, respectively. Meanwhile, the California Camp Fires yield a 0.11 and 0.27 point sentiment decline in overall and within-cohort averages, respectively. This large difference in sentiment change can be attributed to two factors: first, the number of tweets made regarding wildfires prior to the (usually unexpected) event is quite low, so within-cohort users tend to have more polarized climate change beliefs. Second, the root cause of the Camp Fires was quickly linked to PG&E, bolstering claims that climate change had nothing to do with the rapid spread of fire; hence within-cohort users were less vocally positive regarding climate change post-event.
There are several caveats in our work: first, tweet sentiment is rarely binary (this work could be extended to a multinomial or continuous model). Second, our results are constrained to Twitter users, who are known to be more negative than the general U.S. population BIBREF9 . Third, we do not take into account the aggregate effects of continued natural disasters over time. Going forward, there is clear demand in discovering whether social networks can indicate environmental metrics in a “nowcasting" fashion. As climate change becomes more extreme, it remains to be seen what degree of predictive power exists in our current model regarding climate change sentiments with regards to natural disasters. | Influential tweeters ( who they define as individuals certain to have a classifiable sentiment regarding the topic at hand) is used to label tweets in bulk in the absence of manually-labeled tweets. |
279b633b90fa2fd69e84726090fadb42ebdf4c02 | 279b633b90fa2fd69e84726090fadb42ebdf4c02_0 | Q: Which five natural disasters were examined?
Text: Background
Much prior work has been done at the intersection of climate change and Twitter, such as tracking climate change sentiment over time BIBREF2 , finding correlations between Twitter climate change sentiment and seasonal effects BIBREF3 , and clustering Twitter users based on climate mentalities using network analysis BIBREF4 . Throughout, Twitter has been accepted as a powerful tool given the magnitude and reach of samples unattainable from standard surveys. However, the aforementioned studies are not scalable with regards to training data, do not use more recent sentiment analysis tools (such as neural nets), and do not consider unbiased comparisons pre- and post- various climate events (which would allow for a more concrete evaluation of shocks to climate change sentiment). This paper aims to address these three concerns as follows.
First, we show that machine learning models formed using our labeling technique can accurately predict tweet sentiment (see Section SECREF2 ). We introduce a novel method to intuit binary sentiments of large numbers of tweets for training purposes. Second, we quantify unbiased outcomes from these predicted sentiments (see Section SECREF4 ). We do this by comparing sentiments within the same cohort of Twitter users tweeting both before and after specific natural disasters; this removes bias from over-weighting Twitter users who are only compelled to compose tweets after a disaster.
Data
We henceforth refer to a tweet affirming climate change as a “positive" sample (labeled as 1 in the data), and a tweet denying climate change as a “negative" sample (labeled as -1 in the data). All data were downloaded from Twitter in two separate batches using the “twint" scraping tool BIBREF5 to sample historical tweets for several different search terms; queries always included either “climate change" or “global warming", and further included disaster-specific search terms (e.g., “bomb cyclone," “blizzard," “snowstorm," etc.). We refer to the first data batch as “influential" tweets, and the second data batch as “event-related" tweets.
The first data batch consists of tweets relevant to blizzards, hurricanes, and wildfires, under the constraint that they are tweeted by “influential" tweeters, who we define as individuals certain to have a classifiable sentiment regarding the topic at hand. For example, we assume that any tweet composed by Al Gore regarding climate change is a positive sample, whereas any tweet from conspiracy account @ClimateHiJinx is a negative sample. The assumption we make in ensuing methods (confirmed as reasonable in Section SECREF2 ) is that influential tweeters can be used to label tweets in bulk in the absence of manually-labeled tweets. Here, we enforce binary labels for all tweets composed by each of the 133 influential tweeters that we identified on Twitter (87 of whom accept climate change), yielding a total of 16,360 influential tweets.
The second data batch consists of event-related tweets for five natural disasters occurring in the U.S. in 2018. These are: the East Coast Bomb Cyclone (Jan. 2 - 6); the Mendocino, California wildfires (Jul. 27 - Sept. 18); Hurricane Florence (Aug. 31 - Sept. 19); Hurricane Michael (Oct. 7 - 16); and the California Camp Fires (Nov. 8 - 25). For each disaster, we scraped tweets starting from two weeks prior to the beginning of the event, and continuing through two weeks after the end of the event. Summary statistics on the downloaded event-specific tweets are provided in Table TABREF1 . Note that the number of tweets occurring prior to the two 2018 sets of California fires are relatively small. This is because the magnitudes of these wildfires were relatively unpredictable, whereas blizzards and hurricanes are often forecast weeks in advance alongside public warnings. The first (influential tweet data) and second (event-related tweet data) batches are de-duplicated to be mutually exclusive. In Section SECREF2 , we perform geographic analysis on the event-related tweets from which we can scrape self-reported user city from Twitter user profile header cards; overall this includes 840 pre-event and 5,984 post-event tweets.
To create a model for predicting sentiments of event-related tweets, we divide the first data batch of influential tweets into training and validation datasets with a 90%/10% split. The training set contains 49.2% positive samples, and the validation set contains 49.0% positive samples. We form our test set by manually labeling a subset of 500 tweets from the event-related tweets (randomly chosen across all five natural disasters), of which 50.0% are positive samples.
Labeling Methodology
Our first goal is to train a sentiment analysis model (on training and validation datasets) in order to perform classification inference on event-based tweets. We experimented with different feature extraction methods and classification models. Feature extraction methods examined include Tokenizer, Unigram, Bigram, 5-char-gram, and tf-idf methods. Models include both neural nets (e.g. RNNs, CNNs) and standard machine learning tools (e.g. Naive Bayes with Laplace Smoothing, k-clustering, SVM with linear kernel). Model accuracies are reported in Table FIGREF3 .
The RNN pre-trained using GloVe word embeddings BIBREF6 achieved the highest test accuracy. We pass tokenized features into the embedding layer, followed by an LSTM BIBREF7 with dropout and ReLU activation, and a dense layer with sigmoid activation. We apply an Adam optimizer on the binary cross-entropy loss. Implementing this simple, one-layer LSTM allows us to surpass the other traditional machine learning classification methods. Note the 13-point spread between validation and test accuracies achieved. Ideally, the training, validation, and test datasets have the same underlying distribution of tweet sentiments; the assumption made with our labeling technique is that the influential accounts chosen are representative of all Twitter accounts. Critically, when choosing the influential Twitter users who believe in climate change, we highlighted primarily politicians or news sources (i.e., verifiably affirming or denying climate change); these tweets rarely make spelling errors or use sarcasm. Due to this skew, the model yields a high rate of false negatives. It is likely that we could lessen the gap between validation and test accuracies by finding more “real" Twitter users who are climate change believers, e.g. by using the methodology found in BIBREF4 .
Outcome Analysis
Our second goal is to compare the mean values of users' binary sentiments both pre- and post- each natural disaster event. Applying our highest-performing RNN to event-related tweets yields the following breakdown of positive tweets: Bomb Cyclone (34.7%), Mendocino Wildfire (80.4%), Hurricane Florence (57.2%), Hurricane Michael (57.6%), and Camp Fire (70.1%). As sanity checks, we examine the predicted sentiments on a subset with geographic user information and compare results to the prior literature.
In Figure FIGREF3 , we map 4-clustering results on three dimensions: predicted sentiments, latitude, and longitude. The clusters correspond to four major regions of the U.S.: the Northeast (green), Southeast (yellow), Midwest (blue), and West Coast (purple); centroids are designated by crosses. Average sentiments within each cluster confirm prior knowledge BIBREF1 : the Southeast and Midwest have lower average sentiments ( INLINEFORM0 and INLINEFORM1 , respectively) than the West Coast and Northeast (0.22 and 0.09, respectively). In Figure FIGREF5 , we plot predicted sentiment averaged by U.S. city of event-related tweeters. The majority of positive tweets emanate from traditionally liberal hubs (e.g. San Francisco, Los Angeles, Austin), while most negative tweets come from the Philadelphia metropolitan area. These regions aside, rural areas tended to see more negative sentiment tweeters post-event, whereas urban regions saw more positive sentiment tweeters; however, overall average climate change sentiment pre- and post-event was relatively stable geographically. This map further confirms findings that coastal cities tend to be more aware of climate change BIBREF8 .
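The clustering step can be reproduced along the following lines; the example coordinates, the standardization step, and the column ordering are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per geo-located tweeter: [predicted_sentiment, latitude, longitude].
X = np.array([[ 1, 37.77, -122.42],   # e.g. a positive tweeter near San Francisco
              [ 1, 30.27,  -97.74],   # Austin
              [-1, 39.95,  -75.17],   # Philadelphia
              [-1, 41.26,  -95.93],   # Omaha
              [ 1, 42.36,  -71.06],   # Boston
              [-1, 33.75,  -84.39]])  # Atlanta

km = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = km.fit_predict(StandardScaler().fit_transform(X))

for c in range(4):
    members = X[labels == c]
    print(f"cluster {c}: mean sentiment {members[:, 0].mean():+.2f}")
```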
From these mapping exercises, we claim that our “influential tweet" labeling is reasonable. We now discuss our final method on outcomes: comparing average Twitter sentiment pre-event to post-event. In Figure FIGREF8 , we display these metrics in two ways: first, as an overall average of tweet binary sentiment, and second, as a within-cohort average of tweet sentiment for the subset of tweets by users who tweeted both before and after the event (hence minimizing awareness bias). We use Student's t-test to calculate the significance of mean sentiment differences pre- and post-event (see Section SECREF4 ). Note that we perform these mean comparisons on all event-related data, since the low number of geo-tagged samples would produce an underpowered study.
Results & Discussion
In Figure FIGREF8 , we see that overall sentiment averages rarely show movement post-event: that is, only Hurricane Florence shows a significant difference in average tweet sentiment pre- and post-event at the 1% level, corresponding to a 0.12 point decrease in positive climate change sentiment. However, controlling for the same group of users tells a different story: both Hurricane Florence and Hurricane Michael have significant tweet sentiment average differences pre- and post-event at the 1% level. Within-cohort, Hurricane Florence sees an increase in positive climate change sentiment by 0.21 points, which is contrary to the overall average change (the latter being likely biased since an influx of climate change deniers are likely to tweet about hurricanes only after the event). Hurricane Michael sees an increase in average tweet sentiment of 0.11 points, which reverses the direction of tweets from mostly negative pre-event to mostly positive post-event. Likely due to similar bias reasons, the Mendocino wildfires in California see a 0.06 point decrease in overall sentiment post-event, but a 0.09 point increase in within-cohort sentiment. Methodologically, we assert that overall averages are not robust results to use in sentiment analyses.
We now comment on the two events yielding similar results between overall and within-cohort comparisons. Most tweets regarding the Bomb Cyclone have negative sentiment, though sentiment increases by 0.02 and 0.04 points post-event for overall and within-cohort averages, respectively. Meanwhile, the California Camp Fires yield a 0.11 and 0.27 point sentiment decline in overall and within-cohort averages, respectively. This large difference in sentiment change can be attributed to two factors: first, the number of tweets made regarding wildfires prior to the (usually unexpected) event is quite low, so within-cohort users tend to have more polarized climate change beliefs. Second, the root cause of the Camp Fires was quickly linked to PG&E, bolstering claims that climate change had nothing to do with the rapid spread of fire; hence within-cohort users were less vocally positive regarding climate change post-event.
There are several caveats in our work: first, tweet sentiment is rarely binary (this work could be extended to a multinomial or continuous model). Second, our results are constrained to Twitter users, who are known to be more negative than the general U.S. population BIBREF9 . Third, we do not take into account the aggregate effects of continued natural disasters over time. Going forward, there is clear demand in discovering whether social networks can indicate environmental metrics in a “nowcasting" fashion. As climate change becomes more extreme, it remains to be seen what degree of predictive power exists in our current model regarding climate change sentiments with regards to natural disasters. | the East Coast Bomb Cyclone, the Mendocino, California wildfires, Hurricane Florence, Hurricane Michael, the California Camp Fires |
0106bd9d54e2f343cc5f30bb09a5dbdd171e964b | 0106bd9d54e2f343cc5f30bb09a5dbdd171e964b_0 | Q: Which social media platform is explored?
Text: Introduction
A common social media delivery system such as Twitter supports various media types like video, image and text. This media allows users to share their short posts called Tweets. Users are able to share their tweets with other users that are usually following the source user. However, there are rules to protect the privacy of users from unauthorized access to their timeline BIBREF0. The very nature of user interactions in Twitter micro-blogging social media is oriented towards their daily life, first-hand news reporting and engaging in various events (sports, political stands etc.). According to studies, news on Twitter is propagated and reported faster than in conventional news media BIBREF1. Thus, extracting first-hand news and entities occurring in this fast and versatile online medium gives valuable information. However, the abridged and noisy content of Tweets makes tasks such as named entity recognition and information retrieval even more difficult and challenging BIBREF2.
The task of tracking and recovering information from social media posts is a concise definition of information retrieval in social media BIBREF3, BIBREF4. However, many challenges are blocking useful solutions to this issue, namely, the noisy nature of user generated content and the perplexity of words used in short posts. Sometimes different entities are called exactly the same; for example, "Michael Jordan" refers to a basketball player and also a computer scientist in the field of artificial intelligence. The only thing that divides both of these is the context in which the entity appears. If the context refers to something related to AI, the reader can conclude "Michael Jordan" is the scientist, and if the context refers to sports and basketball then he is the basketball player. The task of distinguishing between different named entities that appear to have the same textual appearance is called named entity disambiguation. There is more useful data available on the subject than plain text alone. For example, images and visual data are more descriptive than just text for tasks such as named entity recognition and disambiguation BIBREF5, while some methods only use the textual data BIBREF6.
The provided extra information is closely related to the textual data. As a clear example, figure FIGREF1 shows a tweet containing an image. The combination of these multimodal data in order to achieve better performance in NLP related tasks is a promising alternative explored recently.
An NLP task such as named entity recognition in social media is a most challenging task because users tend to invent, mistype and epitomize words. Sometimes these words correspond to named entities which makes the recognition task even more difficult BIBREF7. In some cases, the context that carries the entity (surrounding words and related image) is more descriptive than the entity word presentation BIBREF8.
To find a solution to the issues at hand, and keeping multimodal data in mind, recognition of named entities from social media has become a research interest that utilizes images, in contrast to the NER task on conventional text. Researchers in this field have proposed multimodal architectures based on deep neural networks with multimodal input that are capable of combining text and image BIBREF9, BIBREF8, BIBREF10.
In this paper we provide a better-performing solution by proposing a novel method called CWI (Character-Word-Image model). We use a multimodal deep neural network to address the NER task in micro-blogging social media.
The rest of the paper is organized as follows: section SECREF2 provides an insight view of previous methods; section SECREF3 describes the method we propose; section SECREF4 shows experimental evaluation and test results; finally, section SECREF5 concludes the whole article.
Related Work
Many algorithms and methods have been proposed to detect, classify or extract information from single type of data such as audio, text, image etc. However, in the case of social media, data comes in a variety of types such as text, image, video or audio in a bounded style. Most of the time, it is very common to caption a video or image with textual information. This information about the video or image can refer to a person, location etc. From a multimodal learning perspective, jointly computing such data is considered to be more valuable in terms of representation and evaluation. Named entity recognition task, on the other hand, is the task of recognizing named entities from a sentence or group of sentences in a document format.
A named entity is formally defined as a word or phrase that clearly identifies an item from a set of other similar items BIBREF11, BIBREF12. Equation DISPLAY_FORM2 expresses a sequence of tokens.
From this equation, the NER task is defined as recognition of tokens that correspond to interesting items. These items from natural language processing perspective are known as named entity categories; BIO2 proposes four major categories, namely, organization, person, location and miscellaneous BIBREF13. From the biomedical domain, gene, protein, drug and disease names are known as named entities BIBREF14, BIBREF15. Output of NER task is formulated in . $I_s\in [1,N]$ and $I_e\in [1,N]$ is the start and end indices of each named entity and $t$ is named entity type BIBREF16.
BIO2 tagging for named entity recognition is defined in equation . Table TABREF3 shows BIO2 tags and their respective meanings; B and I indicate the beginning and inside of the entity, respectively, while O marks the outside of it. Even though many tagging standards have been proposed for the NER task, BIO is the most widely accepted in real-world applications BIBREF17.
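As a small illustration of the BIO2 scheme (the sentence and labels below are ours, not taken from any dataset):

```python
tokens = ["Hurricane", "Florence", "hits", "North", "Carolina", "today"]
tags   = ["B-MISC",    "I-MISC",   "O",    "B-LOC", "I-LOC",    "O"]

# B- opens an entity, I- continues it, O marks tokens outside any entity.
entities = []
for token, tag in zip(tokens, tags):
    if tag.startswith("B-"):
        entities.append((tag[2:], [token]))
    elif tag.startswith("I-") and entities:
        entities[-1][1].append(token)

print([(etype, " ".join(words)) for etype, words in entities])
# [('MISC', 'Hurricane Florence'), ('LOC', 'North Carolina')]
```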
A named entity recognizer gets $s$ as input and provides entity tags for each token. This sequential process requires information from the whole sentence rather than only tokens and for that reason, it is also considered to be a sequence tagging problem. Another analogous problem to this issue is part of speech tagging and some methods are capable of doing both. However, in cases where noise is present and input sequence has linguistic typos, many methods fail to overcome the problem. As an example, consider a sequence of tokens where a new token invented by social media users gets trended. This trending new word is misspelled and is used in a sequence along with other tokens in which the whole sequence does not follow known linguistic grammar. For this special case, classical methods and those which use engineered features do not perform well.
Using the sequence $s$ by itself or adding more information to it leads to two approaches for overcoming this problem: unimodal and multimodal.
Although many approaches for NER have been proposed, and reviewing them all is not within the scope of this article, we focus on the most closely related classical and deep learning approaches for named entity recognition in two subsections. In subsection SECREF4 unimodal approaches for named entity recognition are presented, while in subsection SECREF7 emerging multimodal solutions are described.
Related Work ::: Unimodal Named Entity Recognition
The recognition of named entities from only textual data (unimodal learning approach) is a well-studied and explored research area. As a prominent example of this category, the Stanford NER is a widely used baseline for many applications BIBREF18. The incorporation of non-local information in information extraction is proposed by the authors using Gibbs sampling. The conditional random field (CRF) approach used in that work creates a chain of cliques, where each clique represents the probabilistic relationship between two adjacent states. Also, the Viterbi algorithm has been used to infer the most likely state in the CRF output sequence. Equation DISPLAY_FORM5 shows the proposed CRF method.
where $\phi $ is the potential function.
CRF finds the most probable likelihood by modeling the input sequence of tokens $s$ as a normalized product of feature functions. In simpler terms, the CRF outputs the most probable tags that follow each other. For example, after B-PER it is more likely to have I-PER, O or any other tag that starts with B- rather than tags that start with I- for a different entity type.
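A toy Viterbi decode over made-up emission and transition scores illustrates how such a layer discourages invalid tag orders (for instance O followed by I-PER); all numbers below are illustrative, not taken from any trained model.

```python
import numpy as np

tags = ["O", "B-PER", "I-PER"]
# emission[t, k]: score of tag k at token t (e.g. produced by a neural encoder)
emission = np.array([[2.0, 1.5, 0.1],
                     [0.5, 0.4, 2.0],
                     [2.0, 0.3, 0.2]])
# transition[j, k]: score of moving from tag j to tag k
transition = np.array([[ 0.5, 0.5, -3.0],   # O -> I-PER penalized
                       [ 0.2, 0.1,  1.0],   # B-PER -> I-PER encouraged
                       [ 0.2, 0.1,  0.5]])  # I-PER -> I-PER allowed

def viterbi(emission, transition):
    n, k = emission.shape
    score = emission[0].copy()
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        cand = score[:, None] + transition + emission[t]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [tags[i] for i in reversed(path)]

print(viterbi(emission, transition))   # ['B-PER', 'I-PER', 'O']
```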
T-NER is another approach that is specifically aimed to answer NER task in twitter BIBREF19. A set of algorithms in their original work have been published to answer tasks such as POS (part of speech tagging), named entity segmentation and NER. Labeled LDA has been used by the authors in order to outperform baseline in BIBREF20 for NER task. Their approach strongly relies on dictionary, contextual and orthographic features.
Deep learning techniques use distributed word or character representations rather than raw one-hot vectors. Most of this research in the NLP field uses pretrained word embeddings such as Word2Vec BIBREF21, GloVe BIBREF22 or fastText BIBREF23. These low-dimensional, real-valued dense vectors have proved to provide better representations for words compared to one-hot vectors or other vector space models.
The combination of word embedding along with bidirectional long-short term memory (LSTM) neural networks are examined in BIBREF24. The authors also propose to add a CRF layer at the end of their neural network architecture in order to preserve output tag relativity. Utilization of recurrent neural networks (RNN) provides better sequential modeling over data. However, only using sequential information does not result in major improvements because these networks tend to rely on the most recent tokens. Instead of using RNN, authors used LSTM. The long and short term memory capability of these networks helps them to keep in memory what is important and forget what is not necessary to remember. Equation DISPLAY_FORM6 formulates forget-gate of an LSTM neural network, eq. shows input-gate, eq. notes output-gate and eq. presents memory-cell. Finally, eq. shows the hidden part of an LSTM unit BIBREF25, BIBREF26.
for all these equations, $\sigma $ is the activation function (sigmoid or tanh are commonly used for LSTM) and $\circ $ is the element-wise (Hadamard) product. $W$ and $U$ are weight matrices and $b$ is the bias, all of which are learned during the training process.
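A single step of these gate equations can be written directly in NumPy as a sketch; the dimensions and random parameters are placeholders, and the candidate cell uses the standard tanh formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # W, U, b each hold parameters for the forget, input, output and candidate-cell blocks.
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])      # forget gate
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])      # input gate
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])      # output gate
    c_tilde = np.tanh(W["c"] @ x_t + U["c"] @ h_prev + b["c"])
    c = f * c_prev + i * c_tilde                              # memory cell (element-wise)
    h = o * np.tanh(c)                                        # hidden state
    return h, c

rng = np.random.default_rng(0)
d_in, d_hid = 8, 4
W = {k: rng.normal(size=(d_hid, d_in)) for k in "fioc"}
U = {k: rng.normal(size=(d_hid, d_hid)) for k in "fioc"}
b = {k: np.zeros(d_hid) for k in "fioc"}
h, c = lstm_step(rng.normal(size=d_in), np.zeros(d_hid), np.zeros(d_hid), W, U, b)
```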
LSTM is useful for capturing the relation of tokens in a forward sequential form, however in natural language processing tasks, it is required to know the upcoming token. To overcome this problem, the authors have used a backward and forward LSTM combining output of both.
In a different approach, character embedding followed by a convolution layer is proposed in BIBREF27 for sequence labeling. The utilized architecture is followed by a bidirectional LSTM layer that ends in a CRF layer. Character embedding is a useful technique that the authors used in combination with word embedding. Character embedding, with convolution as a character-level feature extractor, captures relations between the characters that form a word and reduces spelling noise. It also gives the model an embedding when the pretrained word embedding is empty or randomly initialized for new words. These words are encountered when they were not present in the training set; thus, in the test phase, the model would otherwise fail to provide a useful embedding.
Related Work ::: Multimodal Named Entity Recognition
Multimodal learning has become an emerging research interest and with the rise of deep learning techniques, it has become more visible in different research areas ranging from medical imaging to image segmentation and natural language processing BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF9, BIBREF37, BIBREF38, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF43, BIBREF44, BIBREF45. On the other hand, very little research has been focused on the extraction of named entities with joint image and textual data concerning short and noisy content BIBREF46, BIBREF47, BIBREF9, BIBREF8 while several studies have been explored in textual named entity recognition using neural models BIBREF48, BIBREF49, BIBREF24, BIBREF50, BIBREF27, BIBREF51, BIBREF10, BIBREF52.
State-of-the-art methods have shown acceptable evaluation on structured and well formatted short texts. Techniques based on deep learning such as utilization of convolutional neural networks BIBREF52, BIBREF49, recurrent neural networks BIBREF50 and long short term memory neural networks BIBREF27, BIBREF24 are aimed to solve NER problem.
The multimodal named entity recognizers can be categorized in two categories based on the tasks at hand, one tries to improve NER task with utilization of visual data BIBREF46, BIBREF8, BIBREF47, and the other tries to give further information about the task at hand such as disambiguation of named entities BIBREF9. We will refer to both of these tasks as MNER. To have a better understanding of MNER, equation DISPLAY_FORM9 formulates the available multimodal data while equations and are true for this task.
$i$ refers to image and the rest goes same as equation DISPLAY_FORM2 for word token sequence.
In BIBREF47 pioneering research was conducted using feature extraction from both image and textual data. The extracted features were fed to decision trees in order to output the named entity classes. Researchers have used multiple datasets ranging from buildings to human face images to train their image feature extractor (object detector and k-means clustering) and a text classifier has been trained on texts acquired from DBPedia.
Researchers in BIBREF46 proposed a MNER model with regards to triplet embedding of words, characters and image. Modality attention applied to this triplet indicates the importance of each embedding and their impact on the output while reducing the impact of irrelevant modals. Modality attention layer is applied to all embedding vectors for each modal, however the investigation of fine-grained attention mechanism is still unclear BIBREF53. The proposed method with Inception feature extraction BIBREF54 and pretrained GloVe word vectors shows good results on the dataset that the authors aggregated from Snapchat. This method shows around 0.5 for precision and F-measure for four entity types (person, location, organization and misc) while for segmentation tasks (distinguishing between a named entity and a non-named entity) it shows around 0.7 for the metrics mentioned.
An adaptive co-attention neural network with four generations is proposed in BIBREF8. The adaptive co-attention part is similar to the multimodal attention proposed in BIBREF46 that enabled the authors to have better results over the dataset they collected from Twitter. In their main proposal, convolutional layers are used for word representation, BiLSTM is utilized to combine word and character embeddings, and an attention layer combines the best of the triplet (word, character and image features). VGG-Net16 BIBREF55 is used as a feature extractor for images; while the impact of other deep image feature extractors on the proposed solution is unclear, the results show its superiority over related unimodal methods.
The Proposed Approach
In the present work, we propose a new multimodal deep approach (CWI) that is able to handle noise by co-learning semantics from three modalities, character, word and image. Our method is composed of three parts, convolutional character embedding, joint word embedding (fastText-GloVe) and InceptionV3 image feature extraction BIBREF54, BIBREF23, BIBREF22. Figure FIGREF11 shows CWI architecture in more detail.
Character Feature Extraction shown in the left part of figure FIGREF11 is a composition of six layers. Each sequence of words from a single tweet, $\langle w_1, w_2, \dots , w_n \rangle $ is converted to a sequence of character representation $\langle [c_{(0,0)}, c_{(0,1)}, \dots , c_{(0,k)}], \dots , [c_{(n,0)}, c_{(n,1)}, \dots , c_{(n,k)}] \rangle $ and in order to apply one dimensional convolution, it is required to be in a fixed length. $k$ shows the fixed length of the character sequence representing each word. Rather than using the one-hot representation of characters, a randomly initialized (uniform distribution) embedding layer is used. The first three convolution layers are followed by a one dimensional pooling layer. In each layer, kernel size is increased incrementally from 2 to 4 while the number of kernels are doubled starting from 16. Just like the first part, the second segment of this feature extractor uses three layers but with slight changes. Kernel size is reduced starting from 4 to 2 and the number of kernels is halved starting from 64. In this part, $\otimes $ sign shows concatenation operation. TD + GN + SineRelu note targeted dropout, group normalization and sine-relu BIBREF56, BIBREF57, BIBREF58. These layers prevent the character feature extractor from overfitting. Equation DISPLAY_FORM12 defines SineRelu activation function which is slightly different from Relu.
Instead of using zero in the second part of this equation, $\epsilon (\sin {x}-\cos {x})$ has been used for negative inputs, $\epsilon $ is a hyperparameter that controls the amplitude of $\sin {x}-\cos {x}$ wave. This slight change prevents network from having dead-neurons and unlike Relu, it is differentiable everywhere. On the other hand, it has been proven that using GroupNormalization provides better results than BatchNormalization on various tasks BIBREF57.
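Read literally, the definition above can be implemented as follows; the default value of $\epsilon $ here is an assumption, not a value reported in this work.

```python
import numpy as np

def sine_relu(x, epsilon=0.0025):
    # Identity for positive inputs, epsilon * (sin(x) - cos(x)) otherwise.
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, epsilon * (np.sin(x) - np.cos(x)))

print(sine_relu([-2.0, -0.5, 0.0, 1.5]))
```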
Although dropout provides a major improvement to neural networks as an overfitting prevention technique BIBREF59, in our setup TargetedDropout provides better results. TargetedDropout randomly drops neurons whose output is over a threshold.
Word Feature Extraction is presented in the middle part of figure FIGREF11. Joint embeddings from pretrained word vectors of GloVe BIBREF22 and fastText BIBREF23 by concatenation operation results in 500 dimensional word embedding. In order to have forward and backward information for each hidden layer, we used a bidirectional long-short term memory BIBREF25, BIBREF26. For the words which were not in the pretrained tokens, we used a random initialization (uniform initialization) between -0.25 and 0.25 at each embedding. The result of this phase is extracted features for each word.
Image Feature Extraction is shown in the right part of figure FIGREF11. For this part, we have used InceptionV3 pretrained on ImageNet BIBREF60. Many models were available as first part of image feature extraction, however the main reason we used InceptionV3 as feature extractor backbone is better performance of it on ImageNet and the results obtained by this particular model were slightly better compared to others.
Instead of using a headless version of InceptionV3 for image feature extraction, we have used the full model, which outputs the 1000 classes of ImageNet. Each of these classes resembles an item, and the set of these items can represent a person, location or anything that is identified as a whole. To extract better features from the image, we have used an embedding layer. In other words, we treat the top 5 extracted probabilities as words, as shown in eq. DISPLAY_FORM16; based on our assumption, these five words present textual keywords related to the image, and the combination of these words should provide useful information about the objects in the visual data. An LSTM unit has been used to output the final image features. These combined embeddings from the most probable items in the image are the key to obtaining extra information from a social media post.
where $IW$ is image-word vector, $x$ is output of InceptionV3 and $i$ is the image. $x$ is in domain of [0,1] and $\sum \limits _{\forall k\in x}k=1$ holds true, while $\sum \limits _{\forall k\in IW}k\le 1$.
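A sketch of this image branch is shown below: the full InceptionV3 classifier produces ImageNet probabilities, the five most probable class ids are kept as "image words", and a trainable embedding plus LSTM summarizes them. The embedding and LSTM sizes are assumptions; input images are expected at the usual 299x299 resolution.

```python
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input

inception = InceptionV3(weights="imagenet")        # full model, 1000-class softmax head

def top5_class_ids(image_batch):
    # image_batch: float array of shape (n, 299, 299, 3)
    probs = inception.predict(preprocess_input(image_batch))
    return tf.math.top_k(probs, k=5).indices        # (n, 5) "image word" ids

# Trainable head turning the five class ids into one image feature vector.
image_word_ids = tf.keras.Input(shape=(5,), dtype="int32")
x = layers.Embedding(input_dim=1000, output_dim=64)(image_word_ids)
image_features = layers.LSTM(64)(x)
image_branch = tf.keras.Model(image_word_ids, image_features)
```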
Multimodal Fusion in our work is presented as concatenation of three feature sets extracted from words, characters and images. Unlike previous methods, our original work does not include an attention layer to remove noisy features. Instead, we stacked LSTM units from word and image feature extractors to have better results. The last layer presented at the top right side of figure FIGREF11 shows this part. In our second proposed method, we have used attention layer applied to this triplet. Our proposed attention mechanism is able to detect on which modality to increase or decrease focus. Equations DISPLAY_FORM17, and show attention mechanism related to second proposed model.
Conditional Random Field is the last layer in our setup which forms the final output. The same implementation explained in eq. DISPLAY_FORM5 is used for our method.
Experimental Evaluation
The present section provides evaluation results of our model against baselines. Before diving into our results, a brief description of dataset and its statistics are provided.
Experimental Evaluation ::: Dataset
In BIBREF8 a refined collection of tweets gathered from twitter is presented. Their dataset which is labeled for named entity recognition task contains 8,257 tweets. There are 12,784 entities in total in this dataset. Table TABREF19 shows statistics related to each named entity in training, development and test sets.
Experimental Evaluation ::: Experimental Setup
In order to obtain the best results in tab. TABREF20 for our first model (CWI), we have used the following setup in tables TABREF22, TABREF23, TABREF24 and TABREF25. For the second proposed method, the same parameter settings have been used with an additional attention layer. This additional layer has been added after layer 31 in table TABREF25 and before the final CRF layer, indexed as 32. $Adam$ optimizer with $8\times 10^{-5}$ has been used in training phase with 10 epochs.
Experimental Evaluation ::: Evaluation Results
Table TABREF20 presents evaluation results of our proposed models. Compared to other state of the art methods, our first proposed model shows $1\%$ improvement on f1 score. The effect of different word embedding sizes on our proposed method is presented in TABREF26. Sensitivity to TD+SineRelu+GN is presented in tab. TABREF28.
Conclusion
In this article we have proposed a novel named entity recognizer based on multimodal deep learning. In our proposed model, we have used a new architecture for character feature extraction that has helped our model to overcome the issue of noise. Instead of using image features from the near-last layers of image feature extractors such as Inception, we have used the direct output of the last layer. This last layer provides the 1000 classes of diverse objects that result from InceptionV3 trained on the ImageNet dataset. We used the top 5 classes out of these and converted them to one-hot vectors. The resulting image feature embedding built from these high-probability one-hot vectors helped our model to overcome the issue of noise in images posted by social media users. Evaluation results of our proposed model compared to other state-of-the-art methods show its overall superiority, while in two categories (Person and Miscellaneous) our model outperformed the others. | twitter |
e015d033d4ee1c83fe6f192d3310fb820354a553 | e015d033d4ee1c83fe6f192d3310fb820354a553_0 | Q: What datasets did they use?
Text: Introduction
A common social media delivery system such as Twitter supports various media types like video, image and text. This media allows users to share their short posts called Tweets. Users are able to share their tweets with other users that are usually following the source user. However, there are rules to protect the privacy of users from unauthorized access to their timeline BIBREF0. The very nature of user interactions in Twitter micro-blogging social media is oriented towards their daily life, first-hand news reporting and engaging in various events (sports, political stands etc.). According to studies, news on Twitter is propagated and reported faster than in conventional news media BIBREF1. Thus, extracting first-hand news and entities occurring in this fast and versatile online medium gives valuable information. However, the abridged and noisy content of Tweets makes tasks such as named entity recognition and information retrieval even more difficult and challenging BIBREF2.
The task of tracking and recovering information from social media posts is a concise definition of information retrieval in social media BIBREF3, BIBREF4. However, many challenges are blocking useful solutions to this issue, namely, the noisy nature of user generated content and the perplexity of words used in short posts. Sometimes different entities are called exactly the same; for example, "Michael Jordan" refers to a basketball player and also a computer scientist in the field of artificial intelligence. The only thing that divides both of these is the context in which the entity appears. If the context refers to something related to AI, the reader can conclude "Michael Jordan" is the scientist, and if the context refers to sports and basketball then he is the basketball player. The task of distinguishing between different named entities that appear to have the same textual appearance is called named entity disambiguation. There is more useful data available on the subject than plain text alone. For example, images and visual data are more descriptive than just text for tasks such as named entity recognition and disambiguation BIBREF5, while some methods only use the textual data BIBREF6.
The provided extra information is closely related to the textual data. As a clear example, figure FIGREF1 shows a tweet containing an image. The combination of these multimodal data in order to achieve better performance in NLP related tasks is a promising alternative explored recently.
An NLP task such as named entity recognition in social media is a most challenging task because users tend to invent, mistype and epitomize words. Sometimes these words correspond to named entities which makes the recognition task even more difficult BIBREF7. In some cases, the context that carries the entity (surrounding words and related image) is more descriptive than the entity word presentation BIBREF8.
To find a solution to the issues at hand, and keeping multimodal data in mind, recognition of named entities from social media has become a research interest that utilizes images, in contrast to the NER task on conventional text. Researchers in this field have proposed multimodal architectures based on deep neural networks with multimodal input that are capable of combining text and image BIBREF9, BIBREF8, BIBREF10.
In this paper we provide a better-performing solution by proposing a novel method called CWI (Character-Word-Image model). We use a multimodal deep neural network to address the NER task in micro-blogging social media.
The rest of the paper is organized as follows: section SECREF2 provides an insight view of previous methods; section SECREF3 describes the method we propose; section SECREF4 shows experimental evaluation and test results; finally, section SECREF5 concludes the whole article.
Related Work
Many algorithms and methods have been proposed to detect, classify or extract information from single type of data such as audio, text, image etc. However, in the case of social media, data comes in a variety of types such as text, image, video or audio in a bounded style. Most of the time, it is very common to caption a video or image with textual information. This information about the video or image can refer to a person, location etc. From a multimodal learning perspective, jointly computing such data is considered to be more valuable in terms of representation and evaluation. Named entity recognition task, on the other hand, is the task of recognizing named entities from a sentence or group of sentences in a document format.
A named entity is formally defined as a word or phrase that clearly identifies an item from a set of other similar items BIBREF11, BIBREF12. Equation DISPLAY_FORM2 expresses a sequence of tokens.
From this equation, the NER task is defined as recognition of tokens that correspond to interesting items. These items from natural language processing perspective are known as named entity categories; BIO2 proposes four major categories, namely, organization, person, location and miscellaneous BIBREF13. From the biomedical domain, gene, protein, drug and disease names are known as named entities BIBREF14, BIBREF15. Output of NER task is formulated in . $I_s\in [1,N]$ and $I_e\in [1,N]$ is the start and end indices of each named entity and $t$ is named entity type BIBREF16.
BIO2 tagging for named entity recognition is defined in equation . Table TABREF3 shows BIO2 tags and their respective meanings; B and I indicate the beginning and inside of the entity, respectively, while O marks the outside of it. Even though many tagging standards have been proposed for the NER task, BIO is the most widely accepted in real-world applications BIBREF17.
A named entity recognizer gets $s$ as input and provides entity tags for each token. This sequential process requires information from the whole sentence rather than only tokens and for that reason, it is also considered to be a sequence tagging problem. Another analogous problem to this issue is part of speech tagging and some methods are capable of doing both. However, in cases where noise is present and input sequence has linguistic typos, many methods fail to overcome the problem. As an example, consider a sequence of tokens where a new token invented by social media users gets trended. This trending new word is misspelled and is used in a sequence along with other tokens in which the whole sequence does not follow known linguistic grammar. For this special case, classical methods and those which use engineered features do not perform well.
Using the sequence $s$ by itself or adding more information to it leads to two approaches for overcoming this problem: unimodal and multimodal.
Although many approaches for NER have been proposed, and reviewing them all is not within the scope of this article, we focus on the most closely related classical and deep learning approaches for named entity recognition in two subsections. In subsection SECREF4 unimodal approaches for named entity recognition are presented, while in subsection SECREF7 emerging multimodal solutions are described.
Related Work ::: Unimodal Named Entity Recognition
The recognition of named entities from only textual data (unimodal learning approach) is a well-studied and explored research area. As a prominent example of this category, the Stanford NER is a widely used baseline for many applications BIBREF18. The incorporation of non-local information in information extraction is proposed by the authors using Gibbs sampling. The conditional random field (CRF) approach used in that work creates a chain of cliques, where each clique represents the probabilistic relationship between two adjacent states. Also, the Viterbi algorithm has been used to infer the most likely state in the CRF output sequence. Equation DISPLAY_FORM5 shows the proposed CRF method.
where $\phi $ is the potential function.
CRF finds the most probable likelihood by modeling the input sequence of tokens $s$ as a normalized product of feature functions. In simpler terms, the CRF outputs the most probable tags that follow each other. For example, after B-PER it is more likely to have I-PER, O or any other tag that starts with B- rather than tags that start with I- for a different entity type.
T-NER is another approach that is specifically aimed to answer NER task in twitter BIBREF19. A set of algorithms in their original work have been published to answer tasks such as POS (part of speech tagging), named entity segmentation and NER. Labeled LDA has been used by the authors in order to outperform baseline in BIBREF20 for NER task. Their approach strongly relies on dictionary, contextual and orthographic features.
Deep learning techniques use distributed word or character representations rather than raw one-hot vectors. Most of this research in the NLP field uses pretrained word embeddings such as Word2Vec BIBREF21, GloVe BIBREF22 or fastText BIBREF23. These low-dimensional, real-valued dense vectors have proved to provide better representations for words compared to one-hot vectors or other vector space models.
The combination of word embedding along with bidirectional long-short term memory (LSTM) neural networks are examined in BIBREF24. The authors also propose to add a CRF layer at the end of their neural network architecture in order to preserve output tag relativity. Utilization of recurrent neural networks (RNN) provides better sequential modeling over data. However, only using sequential information does not result in major improvements because these networks tend to rely on the most recent tokens. Instead of using RNN, authors used LSTM. The long and short term memory capability of these networks helps them to keep in memory what is important and forget what is not necessary to remember. Equation DISPLAY_FORM6 formulates forget-gate of an LSTM neural network, eq. shows input-gate, eq. notes output-gate and eq. presents memory-cell. Finally, eq. shows the hidden part of an LSTM unit BIBREF25, BIBREF26.
for all these equations, $\sigma $ is the activation function (sigmoid or tanh are commonly used for LSTM) and $\circ $ is the element-wise (Hadamard) product. $W$ and $U$ are weight matrices and $b$ is the bias, all of which are learned during the training process.
LSTM is useful for capturing the relation of tokens in a forward sequential form, however in natural language processing tasks, it is required to know the upcoming token. To overcome this problem, the authors have used a backward and forward LSTM combining output of both.
In a different approach, character embedding followed by a convolution layer is proposed in BIBREF27 for sequence labeling. The utilized architecture is followed by a bidirectional LSTM layer that ends in a CRF layer. Character embedding is a useful technique that the authors used in combination with word embedding. Character embedding, with convolution as a character-level feature extractor, captures relations between the characters that form a word and reduces spelling noise. It also gives the model an embedding when the pretrained word embedding is empty or randomly initialized for new words. These words are encountered when they were not present in the training set; thus, in the test phase, the model would otherwise fail to provide a useful embedding.
Related Work ::: Multimodal Named Entity Recognition
Multimodal learning has become an emerging research interest and with the rise of deep learning techniques, it has become more visible in different research areas ranging from medical imaging to image segmentation and natural language processing BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF9, BIBREF37, BIBREF38, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF43, BIBREF44, BIBREF45. On the other hand, very little research has been focused on the extraction of named entities with joint image and textual data concerning short and noisy content BIBREF46, BIBREF47, BIBREF9, BIBREF8 while several studies have been explored in textual named entity recognition using neural models BIBREF48, BIBREF49, BIBREF24, BIBREF50, BIBREF27, BIBREF51, BIBREF10, BIBREF52.
State-of-the-art methods have shown acceptable evaluation on structured and well formatted short texts. Techniques based on deep learning such as utilization of convolutional neural networks BIBREF52, BIBREF49, recurrent neural networks BIBREF50 and long short term memory neural networks BIBREF27, BIBREF24 are aimed to solve NER problem.
The multimodal named entity recognizers can be categorized in two categories based on the tasks at hand, one tries to improve NER task with utilization of visual data BIBREF46, BIBREF8, BIBREF47, and the other tries to give further information about the task at hand such as disambiguation of named entities BIBREF9. We will refer to both of these tasks as MNER. To have a better understanding of MNER, equation DISPLAY_FORM9 formulates the available multimodal data while equations and are true for this task.
$i$ refers to image and the rest goes same as equation DISPLAY_FORM2 for word token sequence.
In BIBREF47 pioneering research was conducted using feature extraction from both image and textual data. The extracted features were fed to decision trees in order to output the named entity classes. Researchers have used multiple datasets ranging from buildings to human face images to train their image feature extractor (object detector and k-means clustering) and a text classifier has been trained on texts acquired from DBPedia.
Researchers in BIBREF46 proposed a MNER model with regards to triplet embedding of words, characters and image. Modality attention applied to this triplet indicates the importance of each embedding and their impact on the output while reducing the impact of irrelevant modals. Modality attention layer is applied to all embedding vectors for each modal, however the investigation of fine-grained attention mechanism is still unclear BIBREF53. The proposed method with Inception feature extraction BIBREF54 and pretrained GloVe word vectors shows good results on the dataset that the authors aggregated from Snapchat. This method shows around 0.5 for precision and F-measure for four entity types (person, location, organization and misc) while for segmentation tasks (distinguishing between a named entity and a non-named entity) it shows around 0.7 for the metrics mentioned.
An adaptive co-attention neural network with four generations is proposed in BIBREF8. The adaptive co-attention part is similar to the multimodal attention proposed in BIBREF46 that enabled the authors to have better results over the dataset they collected from Twitter. In their main proposal, convolutional layers are used for word representation, BiLSTM is utilized to combine word and character embeddings, and an attention layer combines the best of the triplet (word, character and image features). VGG-Net16 BIBREF55 is used as a feature extractor for images; while the impact of other deep image feature extractors on the proposed solution is unclear, the results show its superiority over related unimodal methods.
The Proposed Approach
In the present work, we propose a new multimodal deep approach (CWI) that is able to handle noise by co-learning semantics from three modalities, character, word and image. Our method is composed of three parts, convolutional character embedding, joint word embedding (fastText-GloVe) and InceptionV3 image feature extraction BIBREF54, BIBREF23, BIBREF22. Figure FIGREF11 shows CWI architecture in more detail.
Character Feature Extraction shown in the left part of figure FIGREF11 is a composition of six layers. Each sequence of words from a single tweet, $\langle w_1, w_2, \dots , w_n \rangle $ is converted to a sequence of character representation $\langle [c_{(0,0)}, c_{(0,1)}, \dots , c_{(0,k)}], \dots , [c_{(n,0)}, c_{(n,1)}, \dots , c_{(n,k)}] \rangle $ and in order to apply one dimensional convolution, it is required to be in a fixed length. $k$ shows the fixed length of the character sequence representing each word. Rather than using the one-hot representation of characters, a randomly initialized (uniform distribution) embedding layer is used. The first three convolution layers are followed by a one dimensional pooling layer. In each layer, kernel size is increased incrementally from 2 to 4 while the number of kernels are doubled starting from 16. Just like the first part, the second segment of this feature extractor uses three layers but with slight changes. Kernel size is reduced starting from 4 to 2 and the number of kernels is halved starting from 64. In this part, $\otimes $ sign shows concatenation operation. TD + GN + SineRelu note targeted dropout, group normalization and sine-relu BIBREF56, BIBREF57, BIBREF58. These layers prevent the character feature extractor from overfitting. Equation DISPLAY_FORM12 defines SineRelu activation function which is slightly different from Relu.
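A simplified sketch of this character branch for a single word is given below; the character vocabulary, embedding size and fixed character length are assumptions, the pooling and normalization placement is abridged, and plain ReLU stands in for the SineRelu, TargetedDropout and GroupNormalization layers described above.

```python
from tensorflow import keras
from tensorflow.keras import layers

CHAR_VOCAB, CHAR_LEN, CHAR_DIM = 128, 16, 32   # assumed sizes

inp = keras.Input(shape=(CHAR_LEN,), dtype="int32")
x = layers.Embedding(CHAR_VOCAB, CHAR_DIM)(inp)          # randomly initialized char embedding

# First segment: kernel sizes 2 -> 4, filters doubled from 16, each followed by pooling.
for kernel, filters in [(2, 16), (3, 32), (4, 64)]:
    x = layers.Conv1D(filters, kernel, padding="same", activation="relu")(x)
    x = layers.MaxPooling1D(pool_size=2, padding="same")(x)

# Second segment: kernel sizes 4 -> 2, filters halved from 64.
for kernel, filters in [(4, 64), (3, 32), (2, 16)]:
    x = layers.Conv1D(filters, kernel, padding="same", activation="relu")(x)

char_features = layers.GlobalMaxPooling1D()(x)           # per-word character feature vector
char_branch = keras.Model(inp, char_features)
```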
Instead of using zero in the second part of this equation, $\epsilon (\sin {x}-\cos {x})$ is used for negative inputs, where $\epsilon $ is a hyperparameter that controls the amplitude of the $\sin {x}-\cos {x}$ wave. This slight change prevents the network from having dead neurons and, unlike ReLU, the function is differentiable everywhere. It has also been shown that GroupNormalization provides better results than BatchNormalization on various tasks BIBREF57.
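For reference, a minimal sketch of the SineReLU activation as described here is given below; the specific $\epsilon $ value is only an illustrative default, since the paper treats it as a hyperparameter.

```python
import torch

def sine_relu(x: torch.Tensor, epsilon: float = 0.0025) -> torch.Tensor:
    """SineReLU as described above: identity for non-negative inputs,
    epsilon * (sin(x) - cos(x)) for negative inputs."""
    # epsilon controls the wave amplitude; 0.0025 is an assumed example value
    return torch.where(x >= 0, x, epsilon * (torch.sin(x) - torch.cos(x)))
```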
Although dropout is a well-established overfitting prevention technique for neural networks BIBREF59, in our setup TargetedDropout provides better results. TargetedDropout randomly drops neurons whose output is over a threshold.
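A rough sketch of this character feature extractor is shown below. The filter counts and kernel sizes follow the description above (16/32/64 filters with kernels 2/3/4, then 64/32/16 with kernels 4/3/2); the character embedding size, padding, pooling choices and the use of standard Dropout/GroupNorm/ReLU in place of TargetedDropout/GroupNorm/SineReLU are assumptions, and the concatenation ($\otimes $) connections of the original figure are omitted.

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Illustrative reconstruction of the character branch described above."""
    def __init__(self, n_chars: int, char_emb_dim: int = 30):
        super().__init__()
        self.emb = nn.Embedding(n_chars, char_emb_dim)  # randomly initialized

        def block(c_in, c_out, k):
            # the paper uses TargetedDropout + GroupNorm + SineReLU here;
            # standard Dropout / GroupNorm / ReLU are stand-ins in this sketch
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size=k, padding=k // 2),
                nn.GroupNorm(num_groups=4, num_channels=c_out),
                nn.ReLU(),
                nn.Dropout(0.25),
            )

        # first segment: kernels 2, 3, 4 with 16, 32, 64 filters, each followed by pooling
        self.seg1 = nn.Sequential(
            block(char_emb_dim, 16, 2), nn.MaxPool1d(2),
            block(16, 32, 3), nn.MaxPool1d(2),
            block(32, 64, 4), nn.MaxPool1d(2),
        )
        # second segment: kernels 4, 3, 2 with 64, 32, 16 filters
        self.seg2 = nn.Sequential(block(64, 64, 4), block(64, 32, 3), block(32, 16, 2))

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        # char_ids: (batch * words, k) with k the fixed character length per word
        x = self.emb(char_ids).transpose(1, 2)   # -> (N, emb_dim, k)
        x = self.seg2(self.seg1(x))              # -> (N, 16, reduced_len)
        return x.max(dim=-1).values              # one character-level feature vector per word
```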
Word Feature Extraction is presented in the middle part of figure FIGREF11. Concatenating pretrained GloVe BIBREF22 and fastText BIBREF23 word vectors results in a joint 500-dimensional word embedding. In order to have forward and backward information at each hidden layer, we use a bidirectional long short-term memory network BIBREF25, BIBREF26. For words not covered by the pretrained vocabularies, each embedding is randomly initialized from a uniform distribution between -0.25 and 0.25. The output of this phase is an extracted feature vector for each word.
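A minimal sketch of this word branch is shown below, assuming the pretrained GloVe and fastText matrices are already loaded as tensors whose dimensions sum to the 500-dimensional joint embedding; the LSTM hidden size is an assumption.

```python
import torch
import torch.nn as nn

class WordBiLSTM(nn.Module):
    """Sketch of the word feature extractor: concatenated GloVe + fastText
    vectors fed to a BiLSTM; sizes are illustrative."""
    def __init__(self, glove_weights: torch.Tensor, fasttext_weights: torch.Tensor,
                 hidden: int = 100):
        super().__init__()
        # OOV rows of these matrices are assumed to be uniformly initialized
        # in [-0.25, 0.25] as described above
        self.glove = nn.Embedding.from_pretrained(glove_weights, freeze=False)
        self.fasttext = nn.Embedding.from_pretrained(fasttext_weights, freeze=False)
        joint_dim = glove_weights.size(1) + fasttext_weights.size(1)  # e.g. 500
        self.bilstm = nn.LSTM(joint_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len)
        x = torch.cat([self.glove(token_ids), self.fasttext(token_ids)], dim=-1)
        out, _ = self.bilstm(x)      # (batch, seq_len, 2 * hidden)
        return out                   # per-word forward + backward features
```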
Image Feature Extraction is shown in the right part of figure FIGREF11. For this part, we use InceptionV3 pretrained on ImageNet BIBREF60. Many models were available for this stage; the main reasons we chose InceptionV3 as the feature extraction backbone are its strong performance on ImageNet and the fact that the results obtained with this particular model were slightly better than with the alternatives.
Instead of using a headless version of InceptionV3 for image feature extraction, we use the full model, which outputs the 1000 classes of ImageNet. Each of these classes corresponds to an item, and a set of such items can represent a person, a location or anything that is identified as a whole. To extract better features from the image, we use an embedding layer: we treat the top 5 classes by predicted probability as words, as shown in eq. DISPLAY_FORM16. Based on our assumption, these five words are textual keywords related to the image, and their combination should provide useful information about the objects in the visual data. An LSTM unit is used to output the final image features. These combined embeddings of the most probable items in the image are the key to obtaining extra information from a social media post.
where $IW$ is the image-word vector, $x$ is the output of InceptionV3 and $i$ is the image. $x$ lies in the domain [0,1] and $\sum \limits _{\forall k\in x}k=1$ holds, while $\sum \limits _{\forall k\in IW}k\le 1$.
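A sketch of this image branch is given below, assuming a recent torchvision for the pretrained InceptionV3. Looking up an embedding row for each of the top-5 class indices is equivalent to multiplying the one-hot vectors mentioned above by an embedding matrix; the embedding and LSTM sizes are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class ImageWordFeatures(nn.Module):
    """Sketch of the image branch: full InceptionV3 (1000 ImageNet classes),
    top-5 classes treated as 'words', passed through an embedding and an LSTM."""
    def __init__(self, emb_dim: int = 100, hidden: int = 100):
        super().__init__()
        self.backbone = models.inception_v3(weights="IMAGENET1K_V1")
        self.backbone.eval()                          # frozen feature extractor
        self.class_emb = nn.Embedding(1000, emb_dim)  # one 'word' per ImageNet class
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (batch, 3, 299, 299), preprocessed for InceptionV3
        with torch.no_grad():
            probs = torch.softmax(self.backbone(images), dim=-1)  # distribution over 1000 classes
        top5 = probs.topk(5, dim=-1).indices                      # indices of the top-5 'words'
        seq = self.class_emb(top5)                                # (batch, 5, emb_dim)
        _, (h_n, _) = self.lstm(seq)
        return h_n[-1]                                            # final image feature vector
```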
Multimodal Fusion in our work is the concatenation of the three feature sets extracted from words, characters and images. Unlike previous methods, our original model does not include an attention layer to remove noisy features; instead, we stack LSTM units from the word and image feature extractors to obtain better results. The last layer, presented at the top right side of figure FIGREF11, shows this part. In our second proposed method, we apply an attention layer to this triplet. The proposed attention mechanism is able to detect on which modality to increase or decrease focus. Equation DISPLAY_FORM17 and the following equations define the attention mechanism of the second proposed model.
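For illustration, a minimal fusion sketch for the first (attention-free) model is shown below; broadcasting the single image vector to every token position before concatenation is an assumption about how the per-word and per-post features are aligned, not a detail stated in the text.

```python
import torch

def fuse(char_feats: torch.Tensor, word_feats: torch.Tensor,
         image_feats: torch.Tensor) -> torch.Tensor:
    """Concatenation-based multimodal fusion.
    char_feats: (batch, seq_len, d_c); word_feats: (batch, seq_len, d_w);
    image_feats: (batch, d_i), repeated for every token position (assumed)."""
    image_rep = image_feats.unsqueeze(1).expand(-1, word_feats.size(1), -1)
    return torch.cat([char_feats, word_feats, image_rep], dim=-1)
```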
Conditional Random Field is the last layer in our setup which forms the final output. The same implementation explained in eq. DISPLAY_FORM5 is used for our method.
Experimental Evaluation
The present section provides evaluation results of our model against the baselines. Before diving into the results, a brief description of the dataset and its statistics is provided.
Experimental Evaluation ::: Dataset
In BIBREF8, a refined collection of tweets gathered from Twitter is presented. Their dataset, which is labeled for the named entity recognition task, contains 8,257 tweets and 12,784 entities in total. Table TABREF19 shows statistics for each named entity in the training, development and test sets.
Experimental Evaluation ::: Experimental Setup
In order to obtain the best results in tab. TABREF20 for our first model (CWI), we used the setup described in tables TABREF22, TABREF23, TABREF24 and TABREF25. For the second proposed method, the same parameter settings were used with an additional attention layer. This additional layer was added after layer 31 in table TABREF25 and before the final CRF layer, indexed as 32. The $Adam$ optimizer with a learning rate of $8\times 10^{-5}$ was used in the training phase for 10 epochs.
Experimental Evaluation ::: Evaluation Results
Table TABREF20 presents evaluation results of our proposed models. Compared to other state-of-the-art methods, our first proposed model shows a $1\%$ improvement in F1 score. The effect of different word embedding sizes on our proposed method is presented in tab. TABREF26. Sensitivity to TD+SineRelu+GN is presented in tab. TABREF28.
Conclusion
In this article we have proposed a novel named entity recognizer based on multimodal deep learning. In our proposed model, we used a new architecture for character feature extraction that helped our model overcome the issue of noise. Instead of using image features from the near-last layers of image feature extractors such as Inception, we used the direct output of the last layer, i.e., the probabilities over the 1000 diverse object classes of InceptionV3 trained on the ImageNet dataset. We took the top 5 of these classes and converted them to one-hot vectors. The resulting image feature embedding built from these high-probability one-hot vectors helped our model overcome the issue of noise in images posted by social media users. Evaluation results of our proposed model compared to other state-of-the-art methods show its overall superiority, and in two categories (Person and Miscellaneous) our model outperformed all others. | BIBREF8 a refined collection of tweets gathered from twitter
8a871b136ccef78391922377f89491c923a77730 | 8a871b136ccef78391922377f89491c923a77730_0 | Q: What are the baseline state of the art models?
Text: Introduction
A common social media delivery system such as Twitter supports various media types like video, image and text. This medium allows users to share their short posts called tweets. Users are able to share their tweets with other users, usually those following the source user; however, there are rules to protect the privacy of users from unauthorized access to their timelines BIBREF0. The very nature of user interactions in the Twitter micro-blogging social medium is oriented towards their daily life, first-hand news reporting and engagement in various events (sports, political stands etc.). According to studies, news on Twitter is propagated and reported faster than in conventional news media BIBREF1. Thus, extracting first-hand news and the entities occurring in this fast and versatile online medium gives valuable information. However, the abridged and noisy content of tweets makes tasks such as named entity recognition and information retrieval even more difficult and challenging BIBREF2.
The task of tracking and recovering information from social media posts is a concise definition of information retrieval in social media BIBREF3, BIBREF4. However, many challenges block useful solutions to this issue, namely the noisy nature of user-generated content and the ambiguity of words used in short posts. Sometimes different entities are referred to in exactly the same way; for example, "Michael Jordan" refers to a basketball player and also to a computer scientist in the field of artificial intelligence. The only thing that distinguishes them is the context in which the entity appears. If the context refers to something related to AI, the reader can conclude that "Michael Jordan" is the scientist, and if the context refers to sports and basketball, then he is the basketball player. The task of distinguishing between different named entities that appear to have the same textual form is called named entity disambiguation. There is often more useful data about the subject than the plain text alone. For example, images and visual data are more descriptive than text alone for tasks such as named entity recognition and disambiguation BIBREF5, while some methods use only the textual data BIBREF6.
The provided extra information is closely related to the textual data. As a clear example, figure FIGREF1 shows a tweet containing an image. The combination of these multimodal data in order to achieve better performance in NLP related tasks is a promising alternative explored recently.
An NLP task such as named entity recognition in social media is particularly challenging because users tend to invent, mistype and abbreviate words. Sometimes these words correspond to named entities, which makes the recognition task even more difficult BIBREF7. In some cases, the context that carries the entity (surrounding words and the related image) is more descriptive than the entity's word form itself BIBREF8.
To address these issues, and keeping multimodal data in mind, recognition of named entities from social media has become a research interest that utilizes images, in contrast to the NER task on conventional text. Researchers in this field have proposed multimodal architectures based on deep neural networks with multimodal input that are capable of combining text and image BIBREF9, BIBREF8, BIBREF10.
In this paper we provide a better-performing solution by proposing a novel method called CWI (Character-Word-Image model). We use a multimodal deep neural network to tackle the NER task in micro-blogging social media.
The rest of the paper is organized as follows: section SECREF2 provides an insight view of previous methods; section SECREF3 describes the method we propose; section SECREF4 shows experimental evaluation and test results; finally, section SECREF5 concludes the whole article.
Related Work
Many algorithms and methods have been proposed to detect, classify or extract information from a single type of data, such as audio, text or image. However, in the case of social media, data comes in a variety of types, such as text, image, video or audio, bound together. Most of the time, it is very common to caption a video or image with textual information. This information about the video or image can refer to a person, a location, etc. From a multimodal learning perspective, jointly processing such data is considered to be more valuable in terms of representation and evaluation. Named entity recognition, on the other hand, is the task of recognizing named entities from a sentence or a group of sentences in a document.
Named entity is formally defined as a word or phrase that clearly identifies an item from set of other similar items BIBREF11, BIBREF12. Equation DISPLAY_FORM2 expresses a sequence of tokens.
From this equation, the NER task is defined as the recognition of tokens that correspond to interesting items. From a natural language processing perspective, these items are known as named entity categories; BIO2 proposes four major categories, namely organization, person, location and miscellaneous BIBREF13. In the biomedical domain, gene, protein, drug and disease names are known as named entities BIBREF14, BIBREF15. The output of the NER task is formulated in the corresponding equation, where $I_s\in [1,N]$ and $I_e\in [1,N]$ are the start and end indices of each named entity and $t$ is the named entity type BIBREF16.
BIO2 tagging for named entity recognition is defined in the corresponding equation. Table TABREF3 shows the BIO2 tags and their respective meanings: B and I indicate the beginning and the inside of an entity, respectively, while O marks tokens outside of any entity. Even though many tagging standards have been proposed for the NER task, BIO is the most widely accepted in real-world applications BIBREF17.
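As a concrete illustration (an invented sentence, not taken from the dataset), BIO2 tags for a short sequence look like this:

```python
# token:  Jim    lives  in   New    York   City
# BIO2 :  B-PER  O      O    B-LOC  I-LOC  I-LOC
tagged = [("Jim", "B-PER"), ("lives", "O"), ("in", "O"),
          ("New", "B-LOC"), ("York", "I-LOC"), ("City", "I-LOC")]
```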
A named entity recognizer takes $s$ as input and provides entity tags for each token. This sequential process requires information from the whole sentence rather than from individual tokens only, and for that reason it is also considered a sequence tagging problem. An analogous problem is part-of-speech tagging, and some methods are capable of doing both. However, in cases where noise is present and the input sequence has linguistic typos, many methods fail. As an example, consider a sequence of tokens in which a new token invented by social media users becomes a trend. This trending new word is misspelled and is used in a sequence along with other tokens such that the whole sequence does not follow known linguistic grammar. For this special case, classical methods and those which use engineered features do not perform well.
Whether the sequence $s$ is used by itself or more information is added to it divides the approaches to this problem into two groups: unimodal and multimodal.
Although many approaches for NER have been proposed and reviewing all of them is not within the scope of this article, we focus on the most relevant classical and deep learning approaches for named entity recognition in two subsections. In subsection SECREF4, unimodal approaches for named entity recognition are presented, while in subsection SECREF7 emerging multimodal solutions are described.
Related Work ::: Unimodal Named Entity Recognition
The recognition of named entities from only textual data (the unimodal learning approach) is a well-studied and explored research area. A prominent example of this category, the Stanford NER, is a widely used baseline for many applications BIBREF18. The incorporation of non-local information into information extraction is proposed by the authors using Gibbs sampling. The conditional random field (CRF) approach used in that work creates a chain of cliques, where each clique represents the probabilistic relationship between two adjacent states. The Viterbi algorithm is also used to infer the most likely states in the CRF output sequence. Equation DISPLAY_FORM5 shows the proposed CRF method.
where $\phi $ is the potential function.
CRF finds the most probable tag sequence by modeling the input sequence of tokens $s$ as a normalized product of feature functions. In simpler terms, CRF outputs the most probable tags that follow each other. For example, after B-PER it is more likely to see I-PER, O or any tag that starts with B- than to encounter another tag that starts with I-.
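Written out, a standard linear-chain factorization consistent with this description (the exact form of Eq. DISPLAY_FORM5 in the cited work may differ in detail) is:

```latex
P(y_1,\dots ,y_n \mid s) = \frac{1}{Z(s)} \prod _{t=1}^{n} \phi _t\big (y_{t-1}, y_t, s\big ), \qquad
Z(s) = \sum _{y^{\prime }} \prod _{t=1}^{n} \phi _t\big (y^{\prime }_{t-1}, y^{\prime }_t, s\big )
```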
T-NER is another approach that is specifically aimed at the NER task on Twitter BIBREF19. A set of algorithms was published in the original work to address tasks such as POS (part-of-speech) tagging, named entity segmentation and NER. Labeled LDA was used by the authors in order to outperform the baseline in BIBREF20 for the NER task. Their approach strongly relies on dictionary, contextual and orthographic features.
Deep learning techniques use distributed word or character representations rather than raw one-hot vectors. Most of this research in the NLP field uses pretrained word embeddings such as Word2Vec BIBREF21, GloVe BIBREF22 or fastText BIBREF23. These low-dimensional, real-valued dense vectors have proved to provide better representations for words compared to one-hot vectors or other vector space models.
The combination of word embeddings with bidirectional long short-term memory (LSTM) neural networks is examined in BIBREF24. The authors also propose adding a CRF layer at the end of their neural network architecture in order to preserve output tag dependencies. Utilizing recurrent neural networks (RNNs) provides better sequential modeling of the data. However, using only sequential information does not result in major improvements because these networks tend to rely on the most recent tokens. Instead of a vanilla RNN, the authors used an LSTM: the long- and short-term memory capability of these networks helps them keep in memory what is important and forget what is not necessary to remember. Equation DISPLAY_FORM6 formulates the forget gate of an LSTM network, and the subsequent equations present the input gate, output gate, memory cell and, finally, the hidden state of an LSTM unit BIBREF25, BIBREF26.
In all these equations, $\sigma $ is an activation function (sigmoid and tanh are commonly used in LSTMs) and $\circ $ denotes element-wise multiplication. $W$ and $U$ are weight matrices and $b$ is the bias, all of which are learned during the training process.
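For readers without access to the rendered equations, the conventional LSTM gate equations that this passage refers to can be written as follows (a standard formulation; the cited papers' exact notation may differ):

```latex
\begin{aligned}
f_t &= \sigma (W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)}\\
i_t &= \sigma (W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)}\\
o_t &= \sigma (W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)}\\
c_t &= f_t \circ c_{t-1} + i_t \circ \tanh (W_c x_t + U_c h_{t-1} + b_c) && \text{(memory cell)}\\
h_t &= o_t \circ \tanh (c_t) && \text{(hidden state)}
\end{aligned}
```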
An LSTM is useful for capturing relations between tokens in the forward direction; however, in natural language processing tasks it is also necessary to know the upcoming tokens. To overcome this problem, the authors used forward and backward LSTMs and combined the outputs of both.
In a different approach, character embedding followed by a convolution layer is proposed in BIBREF27 for sequence labeling. The utilized architecture is followed by a bidirectional LSTM layer that ends in a CRF layer. Character embedding is a useful technique that the authors combine with word embedding. Character embedding, with convolution as a feature extractor at the character level, captures relations between the characters that form a word and reduces spelling noise. It also provides the model with an embedding when the pretrained word embedding is missing or randomly initialized for new words. Such words are encountered when they were not present in the training set, so in the test phase the model would otherwise fail to provide a useful embedding.
Related Work ::: Multimodal Named Entity Recognition
Multimodal learning has become an emerging research interest and with the rise of deep learning techniques, it has become more visible in different research areas ranging from medical imaging to image segmentation and natural language processing BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF9, BIBREF37, BIBREF38, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF43, BIBREF44, BIBREF45. On the other hand, very little research has been focused on the extraction of named entities with joint image and textual data concerning short and noisy content BIBREF46, BIBREF47, BIBREF9, BIBREF8 while several studies have been explored in textual named entity recognition using neural models BIBREF48, BIBREF49, BIBREF24, BIBREF50, BIBREF27, BIBREF51, BIBREF10, BIBREF52.
State-of-the-art methods have shown acceptable evaluation on structured and well formatted short texts. Techniques based on deep learning such as utilization of convolutional neural networks BIBREF52, BIBREF49, recurrent neural networks BIBREF50 and long short term memory neural networks BIBREF27, BIBREF24 are aimed to solve NER problem.
Multimodal named entity recognizers can be divided into two categories based on the task at hand: one line of work tries to improve the NER task by utilizing visual data BIBREF46, BIBREF8, BIBREF47, and the other tries to provide further information about the task at hand, such as disambiguation of named entities BIBREF9. We will refer to both of these tasks as MNER. To better understand MNER, equation DISPLAY_FORM9 formulates the available multimodal data, while the subsequent equations also hold for this task.
$i$ refers to the image and the rest of the notation is the same as in equation DISPLAY_FORM2 for the word token sequence.
In BIBREF47, pioneering research was conducted using feature extraction from both image and textual data. The extracted features were fed to decision trees to output the named entity classes. The researchers used multiple datasets, ranging from buildings to human face images, to train their image feature extractor (an object detector with k-means clustering), while a text classifier was trained on texts acquired from DBPedia.
Researchers in BIBREF46 proposed an MNER model based on a triplet embedding of words, characters and the image. Modality attention applied to this triplet indicates the importance of each embedding and its impact on the output while reducing the influence of irrelevant modalities. The modality attention layer is applied to all embedding vectors of each modality; however, fine-grained attention mechanisms remain to be investigated BIBREF53. The proposed method, with Inception feature extraction BIBREF54 and pretrained GloVe word vectors, shows good results on a dataset that the authors aggregated from Snapchat. It reaches around 0.5 precision and F-measure for four entity types (person, location, organization and misc), and around 0.7 on the same metrics for the segmentation task (distinguishing between a named entity and a non-named entity).
An adaptive co-attention neural network with four variants is proposed in BIBREF8. The adaptive co-attention part is similar to the multimodal attention proposed in BIBREF46 and enabled the authors to obtain better results on the dataset they collected from Twitter. In their main proposal, convolutional layers are used for word representation, a BiLSTM is utilized to combine word and character embeddings, and an attention layer combines the best of the triplet (word, character and image features). VGG-Net16 BIBREF55 is used as the image feature extractor; the impact of other deep image feature extractors on the proposed solution is unclear, but the results show its superiority over related unimodal methods.
The Proposed Approach
In the present work, we propose a new multimodal deep approach (CWI) that is able to handle noise by co-learning semantics from three modalities: character, word and image. Our method is composed of three parts: convolutional character embedding, joint word embedding (fastText-GloVe) and InceptionV3 image feature extraction BIBREF54, BIBREF23, BIBREF22. Figure FIGREF11 shows the CWI architecture in more detail.
Character Feature Extraction, shown in the left part of figure FIGREF11, is a composition of six layers. Each sequence of words from a single tweet, $\langle w_1, w_2, \dots , w_n \rangle $, is converted to a sequence of character representations $\langle [c_{(0,0)}, c_{(0,1)}, \dots , c_{(0,k)}], \dots , [c_{(n,0)}, c_{(n,1)}, \dots , c_{(n,k)}] \rangle $, and in order to apply one-dimensional convolution, each sequence must have a fixed length. $k$ denotes the fixed length of the character sequence representing each word. Rather than using a one-hot representation of characters, a randomly initialized (uniform distribution) embedding layer is used. The first three convolution layers are each followed by a one-dimensional pooling layer. In each layer, the kernel size is increased incrementally from 2 to 4 while the number of kernels is doubled, starting from 16. Just like the first part, the second segment of this feature extractor uses three layers but with slight changes: the kernel size is reduced from 4 to 2 and the number of kernels is halved, starting from 64. In this part, the $\otimes $ sign denotes the concatenation operation. TD + GN + SineRelu denote targeted dropout, group normalization and SineReLU BIBREF56, BIBREF57, BIBREF58. These layers prevent the character feature extractor from overfitting. Equation DISPLAY_FORM12 defines the SineReLU activation function, which is slightly different from ReLU.
Instead of using zero in the second part of this equation, $\epsilon (\sin {x}-\cos {x})$ is used for negative inputs, where $\epsilon $ is a hyperparameter that controls the amplitude of the $\sin {x}-\cos {x}$ wave. This slight change prevents the network from having dead neurons and, unlike ReLU, the function is differentiable everywhere. It has also been shown that GroupNormalization provides better results than BatchNormalization on various tasks BIBREF57.
Although dropout is a well-established overfitting prevention technique for neural networks BIBREF59, in our setup TargetedDropout provides better results. TargetedDropout randomly drops neurons whose output is over a threshold.
Word Feature Extraction is presented in the middle part of figure FIGREF11. Concatenating pretrained GloVe BIBREF22 and fastText BIBREF23 word vectors results in a joint 500-dimensional word embedding. In order to have forward and backward information at each hidden layer, we use a bidirectional long short-term memory network BIBREF25, BIBREF26. For words not covered by the pretrained vocabularies, each embedding is randomly initialized from a uniform distribution between -0.25 and 0.25. The output of this phase is an extracted feature vector for each word.
Image Feature Extraction is shown in the right part of figure FIGREF11. For this part, we use InceptionV3 pretrained on ImageNet BIBREF60. Many models were available for this stage; the main reasons we chose InceptionV3 as the feature extraction backbone are its strong performance on ImageNet and the fact that the results obtained with this particular model were slightly better than with the alternatives.
Instead of using a headless version of InceptionV3 for image feature extraction, we use the full model, which outputs the 1000 classes of ImageNet. Each of these classes corresponds to an item, and a set of such items can represent a person, a location or anything that is identified as a whole. To extract better features from the image, we use an embedding layer: we treat the top 5 classes by predicted probability as words, as shown in eq. DISPLAY_FORM16. Based on our assumption, these five words are textual keywords related to the image, and their combination should provide useful information about the objects in the visual data. An LSTM unit is used to output the final image features. These combined embeddings of the most probable items in the image are the key to obtaining extra information from a social media post.
where $IW$ is the image-word vector, $x$ is the output of InceptionV3 and $i$ is the image. $x$ lies in the domain [0,1] and $\sum \limits _{\forall k\in x}k=1$ holds, while $\sum \limits _{\forall k\in IW}k\le 1$.
Multimodal Fusion in our work is the concatenation of the three feature sets extracted from words, characters and images. Unlike previous methods, our original model does not include an attention layer to remove noisy features; instead, we stack LSTM units from the word and image feature extractors to obtain better results. The last layer, presented at the top right side of figure FIGREF11, shows this part. In our second proposed method, we apply an attention layer to this triplet. The proposed attention mechanism is able to detect on which modality to increase or decrease focus. Equation DISPLAY_FORM17 and the following equations define the attention mechanism of the second proposed model.
Conditional Random Field is the last layer in our setup which forms the final output. The same implementation explained in eq. DISPLAY_FORM5 is used for our method.
Experimental Evaluation
The present section provides evaluation results of our model against the baselines. Before diving into the results, a brief description of the dataset and its statistics is provided.
Experimental Evaluation ::: Dataset
In BIBREF8, a refined collection of tweets gathered from Twitter is presented. Their dataset, which is labeled for the named entity recognition task, contains 8,257 tweets and 12,784 entities in total. Table TABREF19 shows statistics for each named entity in the training, development and test sets.
Experimental Evaluation ::: Experimental Setup
In order to obtain the best results in tab. TABREF20 for our first model (CWI), we used the setup described in tables TABREF22, TABREF23, TABREF24 and TABREF25. For the second proposed method, the same parameter settings were used with an additional attention layer. This additional layer was added after layer 31 in table TABREF25 and before the final CRF layer, indexed as 32. The $Adam$ optimizer with a learning rate of $8\times 10^{-5}$ was used in the training phase for 10 epochs.
Experimental Evaluation ::: Evaluation Results
Table TABREF20 presents evaluation results of our proposed models. Compared to other state-of-the-art methods, our first proposed model shows a $1\%$ improvement in F1 score. The effect of different word embedding sizes on our proposed method is presented in tab. TABREF26. Sensitivity to TD+SineRelu+GN is presented in tab. TABREF28.
Conclusion
In this article we have proposed a novel named entity recognizer based on multimodal deep learning. In our proposed model, we used a new architecture for character feature extraction that helped our model overcome the issue of noise. Instead of using image features from the near-last layers of image feature extractors such as Inception, we used the direct output of the last layer, i.e., the probabilities over the 1000 diverse object classes of InceptionV3 trained on the ImageNet dataset. We took the top 5 of these classes and converted them to one-hot vectors. The resulting image feature embedding built from these high-probability one-hot vectors helped our model overcome the issue of noise in images posted by social media users. Evaluation results of our proposed model compared to other state-of-the-art methods show its overall superiority, and in two categories (Person and Miscellaneous) our model outperformed all others. | Stanford NER, BiLSTM+CRF, LSTM+CNN+CRF, T-NER and BiLSTM+CNN+Co-Attention
acd05f31e25856b9986daa1651843b8dc92c2d99 | acd05f31e25856b9986daa1651843b8dc92c2d99_0 | Q: What is the size of the dataset?
Text: Introduction
Sexual violence, including harassment, is a pervasive, worldwide problem with a long history. This global problem has finally become a mainstream issue thanks to the efforts of survivors and advocates. Statistics show that girls and women are put at high risk of experiencing harassment. Women have about a 3 in 5 chance of experiencing sexual harassment, whereas men have slightly less than 1 in 5 chance BIBREF0, BIBREF1, BIBREF2. While women in developing countries face distinct challenges with sexual violence BIBREF3, sexual violence is ubiquitous. In the United States, for example, there are on average >300,000 people who are sexually assaulted every year BIBREF4. Additionally, these numbers could be underestimated, due to reasons like guilt, blame, doubt and fear, which stop many survivors from reporting BIBREF5. Social media can be a more open and accessible channel for those who have experienced harassment to be empowered to freely share their traumatic experiences and to raise awareness of the vast scale of sexual harassment, which then allows us to understand and actively address abusive behavior as part of larger efforts to prevent future sexual harassment. The deadly gang rape of a medical student on a Delhi bus in 2012 was a catalyst for protest and action, including the development of Safecity, which uses online and mobile technology to work towards ending sexual harassment and assault. More recently, the #MeToo and #TimesUp movements further demonstrate how reporting personal stories on social media can raise awareness and empower women. Millions of people around the world have come forward and shared their stories. Instead of being bystanders, more and more people become up-standers who take action to protest against sexual harassment online. The stories of people who experienced harassment can be studied to identify different patterns of sexual harassment, which can enable solutions to be developed to make streets safer and to keep women and girls more secure when navigating city spaces BIBREF6. In this paper, we demonstrated the application of natural language processing (NLP) technologies to uncover harassment patterns from social media data. We made three key contributions:
1. Safecity is the largest publicly-available online forum for reporting sexual harassment BIBREF6. We annotated about 10,000 personal stories from Safecity with the key elements, including information about the harasser (i.e. the words describing the harasser), time, location and the trigger words (i.e. the phrases that indicate the harassment that occurred). The key elements are important for studying the patterns of harassment and victimology BIBREF5, BIBREF7. Furthermore, we also associated each story with five labels that characterize the story in multiple dimensions (i.e. age of harasser, single/multiple harasser(s), type of harasser, type of location and time of day). The annotation data are available online.
2. We proposed joint learning NLP models that use convolutional neural network (CNN) BIBREF8 and bi-directional long short-term memory (BiLSTM) BIBREF9, BIBREF10 as basic units. Our models can automatically extract the key elements from the sexual harassment stories and at the same time categorize the stories in different dimensions. The proposed models outperformed the single task models, and achieved higher than previously reported accuracy in classifications of harassment forms BIBREF6.
3. We uncovered significant patterns from the categorized sexual harassment stories.
Related Work
Conventional surveys and reports are often used to study sexual harassment, but harassment is usually under-reported in them BIBREF2, BIBREF5. The high volume of social media data available online can provide us with a much larger collection of firsthand stories of sexual harassment. Social media data has already been used to analyze and predict distinct societal and health issues, in order to improve the understanding of wide-reaching societal concerns, including mental health, detecting domestic abuse, and cyberbullying BIBREF11, BIBREF12, BIBREF13, BIBREF14.
There are a very limited number of studies on sexual harassment stories shared online. Karlekar and Bansal karlekar2018safecity were, to our knowledge, the first group to apply NLP to analyze a large amount ($\sim $10,000) of sexual harassment stories. Although their CNN-RNN classification models demonstrated high performance on classifying the forms of harassment, only the top 3 majority forms were studied. In order to study the details of the sexual harassment, the trigger words are crucial. Additionally, research indicated that both situational factors and person (or individual difference) factors contribute to sexual harassment BIBREF15. Therefore, the information about perpetrators needs to be extracted, as well as the location and time of events. Karlekar and Bansal karlekar2018safecity applied several visualization techniques in order to capture such information, but it was not obtained explicitly. Our preliminary research demonstrated automatic extraction of key elements and story classification in separate steps BIBREF16. In this paper, we propose joint learning NLP models to directly extract the information of the harasser, time, location and trigger word as key elements and to categorize the harassment stories in five dimensions as well. Our approach can provide an avenue to automatically uncover nuanced circumstances informing sexual harassment from online stories.
Data Collection and Annotation
We obtained 9,892 stories of sexual harassment incidents that were reported on Safecity. Those stories include a text description, along with tags of the forms of harassment, e.g. commenting, ogling and groping. A dataset of these stories was published by Karlekar and Bansal karlekar2018safecity. In addition to the forms of harassment, we manually annotated each story with the key elements (i.e. “harasser", “time", “location", “trigger"), because they are essential to uncovering the harassment patterns. An example is shown in Figure FIGREF3. Furthermore, we also assigned each story classification labels in five dimensions (Table TABREF4). The detailed definitions of the classifications in all dimensions are explained below.
Age of Harasser: Individual differences such as age can affect harassment behaviors. Therefore, we studied the harassers in two age groups, young and adult. Young people in this paper refer to people in their early 20s or younger.
Single/Multiple Harasser(s): Harassers may behave differently in groups than they do alone.
Type of Harasser: Person factors in harassment include the common relationships or titles of the harassers. Additionally, the reactions of people who experience harassment may vary with the harassers' relations to themselves BIBREF5. We defined 10 groups with respects to the harassers' relationships or titles. We put conductors and drivers in one group, as they both work on the public transportation. Police and guards are put in the same category, because they are employed to provide security. Manager, supervisors, and colleagues are in the work-related group. The others are described by their names.
Type of Location: It will be helpful to reveal the places where harassment most frequently occurs BIBREF7, BIBREF6. We defined 14 types of locations. “Station/stop” refers to places where people wait for public transportation or buy tickets. Private places include survivors' or harassers' home, places of parties and etc. The others are described by their names.
Time of Day: The time of an incident may be reported as “in evening” or at a specific time, e.g. “10 pm”. We considered 5 am to 6 pm as day time, and the rest of the day as night.
Because many of the stories collected are short, many do not contain all of the key elements. For example, “A man came near to her tried to be physical with her .”. The time and location are unknown from the story. In addition, the harassers were strangers to those they harassed in many cases. For instance, “My friend was standing in the queue to pay bill and was ogled by a group of boys.”, we can only learn that there were multiple young harassers, but the type of harasser is unclear. The missing information is hence marked as “unspecified”. It is different from the label “other", which means the information is provided but the number of them is too small to be represented by a group, for example, a “trader”.
All the data were labeled by two annotators with training. Inter-rater agreement was measured by Cohen's kappa coefficient, ranging from 0.71 to 0.91 for classifications in different dimensions and 0.75 for key element extraction (details are given in Table 1 in the supplementary file). The disagreements were reviewed by a third annotator and a final decision was made.
Proposed Models
The key elements can be very informative when categorizing the incidents. For instance, in Figure 1, with identified key elements, one can easily categorize the incident in dimensions of “age of harasser” (adult), “single/multiple harasser(s)” (single), “type of harasser” (unspecified), “type of location” (park) , “time of day” (day time). Therefore, we proposed two joint learning schemes to extract the key elements and categorize the incidents together. In the models' names, “J”, “A”, “SA” stand for joint learning, attention, and supervised attention, respectively.
Proposed Models ::: CNN Based Joint Learning Models
In Figure FIGREF6, the first proposed structure consists of two layers of CNN modules.
J-CNN: To predict the type of key element, it is essential for the CNN model to capture the context information around each word. Therefore, the word along with its surrounding context of a fixed window size was converted into a context sequence. Assuming a window size of $2l + 1$ around the target word $w_0$, the context sequence is $[(w_{-l}, w_{-l+1},...w_0, ...w_{l-1},w_l)]$, where $w_i (i \in [-l,l])$ stands for the $ith$ word from $w_0$.
Because the contexts of two consecutive words in the original text are only off by one position, it is difficult for the CNN model to detect the difference. Therefore, the position of each word in this context sequence is crucial information for the CNN model to make correct predictions BIBREF17. That position was embedded as a $p$-dimensional vector, where $p$ is a hyperparameter. The position embeddings were learned at the training stage. Each word in the original text was then converted into a sequence of the concatenation of word and position embeddings. Such a sequence was fed into the CNN modules in the first layer of the model, which output the high-level word representations ($h_i, i\in [0,n-1]$, where n is the number of input words). The high-level word representation was then passed into a fully connected layer to predict the key element type for the word. The CNN modules in this layer share the same parameters.
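A small sketch of how the context sequences and relative positions described above can be constructed is shown below; the padding token and window size are illustrative choices, not the authors' exact preprocessing.

```python
def context_windows(tokens, l=2, pad="<PAD>"):
    """Build the (2l+1)-token context sequence around each word, as described
    above for J-CNN, together with the relative positions -l..l that are later
    mapped to learned position embeddings."""
    padded = [pad] * l + list(tokens) + [pad] * l
    windows = []
    for i in range(len(tokens)):
        window = padded[i:i + 2 * l + 1]
        positions = list(range(-l, l + 1))
        windows.append(list(zip(window, positions)))
    return windows

# e.g. context_windows(["a", "man", "came", "near"], l=1)[1]
# -> [("a", -1), ("man", 0), ("came", 1)]
```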
We input the sequence of high level word representations ($h_i$) from the first layer into another layer of multiple CNN modules to categorize the harassment incident in each dimension (Figure FIGREF6). Inside each CNN module, the sequence of word representations were first passed through a convolution layer to generate a sequence of new feature vectors ($C =[c_0,c_1,...c_q]$). This vector sequence ($C$) was then fed into a max pooling layer. This is followed by a fully connected layer. Modules in this layer do not share parameters across classification tasks.
J-ACNN: We also experimented with attentive pooling, by replacing the max pooling layer. The attention layer aggregates the sequence of feature vectors ($C$) by measuring the contribution of each vector to form the high level representation of the harassment story. Specifically,
That is, a fully connected layer with non-linear activation was applied to each vector $c_{i}$ to get its hidden representation $u_{i}$. The similarity of $u_{i}$ with a context vector $u_{w}$ was measured and get normalized through a softmax function, as the importance weight $\alpha _{i}$. The final representation of the incident story $v$ was an aggregation of all the feature vectors weighted by $\alpha _{i}$. $W_{\omega }$, $b_{\omega }$ and $u_{w}$ were learned during training.
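Written out, this attentive pooling has the familiar form below (a reconstruction consistent with the description; the choice of $\tanh $ as the non-linear activation is an assumption):

```latex
u_i = \tanh (W_{\omega } c_i + b_{\omega }), \qquad
\alpha _i = \frac{\exp (u_i^{\top } u_w)}{\sum _j \exp (u_j^{\top } u_w)}, \qquad
v = \sum _i \alpha _i \, c_i
```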
The final representation ($v$) was passed into one fully connected layer for each classification task. We also applied different attention layers for different classifications, because the classification modules categorize the incident in different dimensions, their focuses vary. For example, to classify “time of day”, one needs to focus on the time phrases, but pays more attention to harassers when classifying “age of harasser”.
J-SACNN: To further exploit the information of the key elements, we applied supervision BIBREF18 to the attentive pooling layer, with the annotated key element types of the words as ground truth. For instance, in classification of “age of harasser”, the ground truth attention labels for words with key element types of “harasser” are 1 and others are 0. To conform to the CNN structure, we applied convolution to the sequence of ground truth attention labels, with the same window size ($w$) that was applied to the word sequence (Eq. DISPLAY_FORM11).
where $\circ $ is element-wise multiplication, $e_t$ is the ground truth attention label, and $W \in R^{w\times 1}$ is a constant matrix with all elements equal to 1. $\alpha ^{*}$ was normalized through a softmax function and used as the ground truth weight values of the vector sequence ($C$) output from the convolution layer. The loss was calculated between the learned attention $\alpha $ and $\alpha ^{*}$ (Eq. DISPLAY_FORM12), and added to the total loss.
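A sketch of how the supervised attention target can be computed is shown below; the squared-error comparison between $\alpha $ and $\alpha ^{*}$ is an assumption, since the text only states that a loss between the two is added to the total loss.

```python
import torch
import torch.nn.functional as F

def supervised_attention_loss(alpha, element_labels, window_size):
    """Ground-truth key-element labels (1.0 where a word has the relevant key
    element type, 0.0 elsewhere) are convolved with an all-ones kernel of the
    same window size, softmax-normalized into alpha*, and compared with the
    learned attention alpha."""
    ones = torch.ones(1, 1, window_size, device=element_labels.device)
    conv = F.conv1d(element_labels.float().unsqueeze(1), ones,
                    padding=window_size // 2)
    alpha_star = torch.softmax(conv.squeeze(1)[:, : alpha.size(1)], dim=-1)
    return F.mse_loss(alpha, alpha_star)  # assumed form of the comparison loss
```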
Proposed Models ::: BiLSTM Based Joint Learning Models
J-BiLSTM: The model inputs the sequence of word embeddings to the BiLSTM layer. To extract key elements, the hidden states from the forward and backward LSTM cells were concatenated and used as word representations to predict the key element types.
To classify the harassment story in different dimensions, concatenation of the forward and backward final states of BiLSTM layer was used as document level representation of the story.
J-ABiLSTM: We also experimented on BiLSTM model with the attention layer to aggregate the outputs from BiLSTM layer (Figure FIGREF7). The aggregation of the outputs was used as document level representation.
J-SABiLSTM: Similarly, we experimented with the supervised attention.
In all the models, softmax function was used to calculate the probabilities at the prediction step, and the cross entropy losses from extraction and classification tasks were added together. In case of supervised attention, the loss defined in Eq. DISPLAY_FORM12 was added to the total loss as well. We applied the stochastic gradient descent algorithm with mini-batches and the AdaDelta update Rule (rho=0.95 and epsilon=1e-6) BIBREF19, BIBREF20. The gradients were computed using back-propagation. During training, we also optimized the word and position embeddings.
Experiments and Results ::: Experimental Settings
Data Splits: We used the same train, development, and test splits used by Karlekar and Bansal BIBREF6, with 7201, 990 and 1701 stories, respectively. In this study, we only considered single label classifications.
Baseline Models: CNN and BiLSTM models that perform classification and extraction separately were used as baseline models. In classification, we also experimented with BiLSTM with the attention layer. To demonstrate that the improvement came from the joint learning structure rather than the two-layer structure in J-CNN, we investigated the same model structure without training on key element extraction. We use J-CNN* to denote it.
Preprocess: All the texts were converted to lowercase and preprocessed by removing non-alphanumeric characters, excluding “. ! ? ” . The word embeddings were pre-trained using fastText BIBREF21 with dimension equaling 100.
Hyperparameters: For the CNN model, the filter size was chosen to be (1,2,3,4), with 50 filters per filter size. Batch size was set to 50 and the dropout rate was 0.5. The BiLSTM model comprises two layers of one directional LSTM. Every LSTM cell has 50 hidden units. The dropout rate was 0.25. Attention size was 50.
Experiments and Results ::: Results and Discussions
We compared joint learning models with the single task models. Results are averages from five experiments. Although not much improvement was achieved in key element extraction (Table TABREF16), classification performance improved significantly with the joint learning schemes (Table TABREF17). Significance t-test results are shown in Table 2 in the supplementary file.
BiLSTM Based Models: Joint learning BiLSTM with attention outperformed single task BiLSTM models. One reason is that it directed the attention of the model to the correct part of the text. For example,
S1–S3 below show the same story three times, each highlighted with the attention weights of a different model; since the original heatmap shading cannot be reproduced here, only the words receiving high attention are noted.
S1: “when i was returning my home after finishing my class . i was in queue to get on the micro bus and there was a girl opposite to me just then a young man tried to touch her on the breast .” — single-task BiLSTM with attention, classifying “age of harasser”: attention is spread over the phrase “young man tried to touch her on the breast” and several surrounding words.
S2: the same sentence under the joint learning model, classifying “age of harasser”: attention is concentrated almost entirely on “young man”.
S3: the same sentence under the joint learning model, classifying “type of location”: attention is concentrated on “micro bus”.
In S1, the regular BiLSTM with attention model for classification on “age of harasser” put some attention on phrases other than the harasser, and hence aggregated noise. This could explain why the regular BiLSTM model got lower performance than the CNN model. However, when training with key element extractions, it put almost all attention on the harasser “young man” (S2), which helped the model make correct prediction of “young harasser”. When predicting the “type of location” (S3), the joint learning model directed its attention to “micro bus”.
CNN Based Models: Since CNN is efficient at capturing the most useful information BIBREF22, it is quite suitable for the classification tasks in this study. It achieved better performance than the BiLSTM model. The joint learning method boosted the performance even higher. This is because the classifications are related to the extracted key elements, and the word representation learned by the first layer of CNNs (Figure FIGREF6) is more informative than the word embedding. By plotting t-SNE projections BIBREF23 of the two kinds of word vectors, we can see that the word representations in the joint learning model made the words more separable (Figure 1 in the supplementary file). In addition, no improvement was found with the J-CNN* model, which demonstrates that joint learning with extraction is essential for the improvement.
With supervised attentive pooling, the model can get additional knowledge from the key element labels. It helped the model in cases where certain location phrases were mentioned but the incidents did not happen at those locations. For instance, for “I was followed on my way home .”, max pooling will very likely predict “private places”, but the location is actually unknown. In other cases, with supervised attentive pooling, the model can distinguish “metro” and “metro station”, which are “transportation” and “stop/station” respectively. Therefore, the model further improved the classifications of “type of location” with supervised attention in terms of macro F1. For some tasks, like “time of day”, there are fewer cases with such disambiguation and hence max pooling worked well. Supervised attention improved macro F1 in the location and harasser classifications, because it made more correct predictions in cases that mentioned the location and harasser; however, the majority of stories did not mention them. Therefore, the accuracy of J-SACNN did not increase compared with the other models.
Classification on Harassment Forms: In Table TABREF18, we also compared the performance of binary classifications on harassment forms with the results reported by Karlekar and Bansal karlekar2018safecity. The joint learning models achieved higher accuracy. In some harassment stories, the whole text or a span of the text consists of trigger words of multiple forms, such as “stare, whistles, start to sing, commenting”. The supervised attention mechanism forces the model to look at all such words rather than just the one related to the harassment form being classified, and hence it can introduce noise. This can explain why J-SACNN got lower accuracy in two of the harassment form classifications, compared to J-ACNN. In addition, the J-CNN model did best in the “ogling” classification.
Patterns of Sexual Harassment
We plotted the distribution of harassment incidents in each categorization dimension (Figure FIGREF19). It displays statistics that provide important evidence as to the scale of harassment and that can serve as the basis for more effective interventions to be developed by authorities ranging from advocacy organizations to policy makers. It provides evidence to support some commonly assumed factors about harassment: First, we demonstrate that harassment occurred more frequently during the night time than the day time. Second, it shows that besides unspecified strangers (not shown in the figure), conductors and drivers are top the list of identified types of harassers, followed by friends and relatives.
Furthermore, we uncovered that there exist strong correlations between the age of perpetrators and the location of harassment, between single/multiple harasser(s) and location, and between age and single/multiple harasser(s) (Figure FIGREF20). The significance of each correlation was tested by a chi-square test of independence with a p-value less than 0.05. Identifying these patterns will enable interventions to be differentiated for and targeted at specific populations. For instance, the young harassers often engage in harassment activities as groups. This points to the influence of peer pressure and masculine behavioral norms for men and boys on these activities. We also found that the majority of young perpetrators engaged in harassment behaviors on the streets. These findings suggest that interventions with young men and boys, who are readily influenced by peers, might be most effective when education is done peer-to-peer. It also points to the locations where such efforts could be made, including both in schools and on the streets. In contrast, we found that adult perpetrators of sexual harassment are more likely to act alone. Most of the adult harassers engaged in harassment on public transportation. These differences in adult harassment activities and locations mean that interventions should be responsive to these factors, for example by increasing the security measures on transit at key times and locations.
In addition, we also found correlations between the forms of harassment and the age, single/multiple harasser(s), type of harasser, and location (Figure FIGREF21). For example, young harassers are more likely to engage in verbal harassment, rather than physical harassment, as compared to adults. Touching or groping was more often committed by a single perpetrator rather than by groups of perpetrators. In contrast, commenting happened more frequently when harassers were in groups. Last but not least, public transportation is where people got indecently touched most frequently, both by fellow passengers and by conductors and drivers. The nature and location of the harassment are particularly significant in developing strategies for those who are harassed or who witness the harassment to respond to and manage the everyday threat of harassment. For example, some strategies will work best on public transport, a particular closed, shared space setting, while other strategies might be more effective in the open space of the street.
These results can provide valuable information for all members of the public. Sharing stories of harassment has been found by researchers to shift people’s cognitive and emotional orientation towards their traumatic experiences BIBREF24. Greater awareness of the patterns and scale of harassment experiences promises to assure those who have been subjected to this violence that they are not alone, empowering others to report incidents, and ensuring them that efforts are being made to prevent others from experiencing the same harassment. These results also provide various authorities with tools to identify potential harassment patterns and to make more effective interventions to prevent further harassment incidents. For instance, the authorities can increase targeted educational efforts at youth and adults, and be guided in utilizing limited resources most effectively to offer more safety measures, including policing and community-based responses, for example, by focusing efforts on highly populated public transportation during the nighttime, when harassment is found to be most likely to occur.
Conclusions
We provided a large number of annotated personal stories of sexual harassment. Analyzing and identifying the social patterns of harassment behavior is essential to changing these patterns and social tolerance for them. We demonstrated joint learning NLP models with strong performance that automatically extract key elements and categorize the stories. Potentially, the approaches and models proposed in this study can be applied to sexual harassment stories from other sources, to process and summarize the harassment stories and help those who have experienced harassment, as well as authorities, to work faster, such as by automatically filing reports BIBREF6. Furthermore, we discovered meaningful patterns in the situations where harassment commonly occurred. The volume of social media data is huge, and the more we can extract from these data, the more powerful we can be as part of the efforts to build safer and more inclusive communities. Our work can increase the understanding of sexual harassment in society, ease the processing of such incidents by advocates and officials, and most importantly, raise awareness of this urgent problem.
Acknowledgments
We thank Safecity for granting permission to use the data. | 9,892 stories of sexual harassment incidents |
8c78b21ec966a5e8405e8b9d3d6e7099e95ea5fb | 8c78b21ec966a5e8405e8b9d3d6e7099e95ea5fb_0 | Q: What model did they use?
Text: Introduction
Sexual violence, including harassment, is a pervasive, worldwide problem with a long history. This global problem has finally become a mainstream issue thanks to the efforts of survivors and advocates. Statistics show that girls and women are put at high risk of experiencing harassment. Women have about a 3 in 5 chance of experiencing sexual harassment, whereas men have slightly less than a 1 in 5 chance BIBREF0, BIBREF1, BIBREF2. While women in developing countries face distinct challenges with sexual violence BIBREF3, sexual violence is ubiquitous. In the United States, for example, there are on average >300,000 people who are sexually assaulted every year BIBREF4. Additionally, these numbers could be underestimated, due to reasons like guilt, blame, doubt and fear, which stop many survivors from reporting BIBREF5. Social media can be a more open and accessible channel for those who have experienced harassment to be empowered to freely share their traumatic experiences and to raise awareness of the vast scale of sexual harassment, which then allows us to understand and actively address abusive behavior as part of larger efforts to prevent future sexual harassment. The deadly gang rape of a medical student on a Delhi bus in 2012 was a catalyst for protest and action, including the development of Safecity, which uses online and mobile technology to work towards ending sexual harassment and assault. More recently, the #MeToo and #TimesUp movements further demonstrate how reporting personal stories on social media can raise awareness and empower women. Millions of people around the world have come forward and shared their stories. Instead of being bystanders, more and more people become upstanders, who take action to protest against sexual harassment online. The stories of people who experienced harassment can be studied to identify different patterns of sexual harassment, which can enable solutions to be developed to make streets safer and to keep women and girls more secure when navigating city spaces BIBREF6. In this paper, we demonstrated the application of natural language processing (NLP) technologies to uncover harassment patterns from social media data. We made three key contributions:
1. Safecity is the largest publicly-available online forum for reporting sexual harassment BIBREF6. We annotated about 10,000 personal stories from Safecity with the key elements, including information about the harasser (i.e. the words describing the harasser), time, location and the trigger words (i.e. the phrases that indicate the harassment that occurred). The key elements are important for studying the patterns of harassment and victimology BIBREF5, BIBREF7. Furthermore, we associated each story with five labels that characterize the story in multiple dimensions (i.e. age of harasser, single/multiple harasser(s), type of harasser, type of location and time of day). The annotation data are available online.
2. We proposed joint learning NLP models that use convolutional neural network (CNN) BIBREF8 and bi-directional long short-term memory (BiLSTM) BIBREF9, BIBREF10 as basic units. Our models can automatically extract the key elements from the sexual harassment stories and at the same time categorize the stories in different dimensions. The proposed models outperformed the single task models, and achieved higher than previously reported accuracy in classifications of harassment forms BIBREF6.
3. We uncovered significant patterns from the categorized sexual harassment stories.
Related Work
Conventional surveys and reports are often used to study sexual harassment, but harassment is usually under-reported in them BIBREF2, BIBREF5. The high volume of social media data available online can provide us with a much larger collection of firsthand stories of sexual harassment. Social media data has already been used to analyze and predict distinct societal and health issues, in order to improve the understanding of wide-reaching societal concerns, including mental health, detecting domestic abuse, and cyberbullying BIBREF11, BIBREF12, BIBREF13, BIBREF14.
There are a very limited number of studies on sexual harassment stories shared online. Karlekar and Bansal karlekar2018safecity were, to our knowledge, the first group that applied NLP to analyze a large number ($\sim $10,000) of sexual harassment stories. Although their CNN-RNN classification models demonstrated high performance on classifying the forms of harassment, only the top 3 majority forms were studied. In order to study the details of the sexual harassment, the trigger words are crucial. Additionally, research indicated that both situational factors and person (or individual difference) factors contribute to sexual harassment BIBREF15. Therefore, the information about perpetrators needs to be extracted, as well as the location and time of events. Karlekar and Bansal karlekar2018safecity applied several visualization techniques in order to capture such information, but it was not obtained explicitly. Our preliminary research demonstrated automatic extraction of key elements and story classification in separate steps BIBREF16. In this paper, we proposed joint learning NLP models to directly extract the information of the harasser, time, location and trigger word as key elements and to categorize the harassment stories in five dimensions as well. Our approach can provide an avenue to automatically uncover nuanced circumstances informing sexual harassment from online stories.
Data Collection and Annotation
We obtained 9,892 stories of sexual harassment incidents that were reported on Safecity. Those stories include a text description, along with tags of the forms of harassment, e.g. commenting, ogling and groping. A dataset of these stories was published by Karlekar and Bansal karlekar2018safecity. In addition to the forms of harassment, we manually annotated each story with the key elements (i.e. “harasser”, “time”, “location”, “trigger”), because they are essential to uncover the harassment patterns. An example is shown in Figure FIGREF3. Furthermore, we also assigned each story classification labels in five dimensions (Table TABREF4). The detailed definitions of classifications in all dimensions are explained below.
Age of Harasser: Individual differences such as age can affect harassment behaviors. Therefore, we studied the harassers in two age groups, young and adult. Young people in this paper refer to people in their early 20s or younger.
Single/Multiple Harasser(s): Harassers may behave differently in groups than they do alone.
Type of Harasser: Person factors in harassment include the common relationships or titles of the harassers. Additionally, the reactions of people who experience harassment may vary with the harassers' relations to themselves BIBREF5. We defined 10 groups with respect to the harassers' relationships or titles. We put conductors and drivers in one group, as they both work on public transportation. Police and guards are put in the same category, because they are employed to provide security. Managers, supervisors, and colleagues are in the work-related group. The others are described by their names.
Type of Location: It will be helpful to reveal the places where harassment most frequently occurs BIBREF7, BIBREF6. We defined 14 types of locations. “Station/stop” refers to places where people wait for public transportation or buy tickets. Private places include survivors' or harassers' homes, places of parties, etc. The others are described by their names.
Time of Day: The time of an incident may be reported as “in evening” or at a specific time, e.g. “10 pm”. We considered 5 am to 6 pm as day time, and the rest of the day as night.
Because many of the stories collected are short, many do not contain all of the key elements. For example, in “A man came near to her tried to be physical with her .”, the time and location are unknown from the story. In addition, the harassers were strangers to those they harassed in many cases. For instance, from “My friend was standing in the queue to pay bill and was ogled by a group of boys.”, we can only learn that there were multiple young harassers, but the type of harasser is unclear. The missing information is hence marked as “unspecified”. It is different from the label “other”, which means the information is provided but the number of such cases is too small to be represented by its own group, for example, a “trader”.
All the data were labeled by two annotators with training. Inter-rater agreement was measured by Cohen's kappa coefficient, ranging from 0.71 to 0.91 for classifications in different dimensions and 0.75 for key element extraction (details are in Table 1 of the supplementary file). The disagreements were reviewed by a third annotator and a final decision was made.
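For reference, inter-rater agreement of this kind can be computed with scikit-learn's Cohen's kappa; the short label lists below are hypothetical, purely to show the call.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-token key-element labels from two annotators ("O" = no key element).
annotator_1 = ["harasser", "O", "time", "location", "trigger", "O"]
annotator_2 = ["harasser", "O", "time", "location", "O", "O"]

print(f"Cohen's kappa: {cohen_kappa_score(annotator_1, annotator_2):.2f}")
```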
Proposed Models
The key elements can be very informative when categorizing the incidents. For instance, in Figure 1, with identified key elements, one can easily categorize the incident in the dimensions of “age of harasser” (adult), “single/multiple harasser(s)” (single), “type of harasser” (unspecified), “type of location” (park), “time of day” (day time). Therefore, we proposed two joint learning schemes to extract the key elements and categorize the incidents together. In the models' names, “J”, “A”, “SA” stand for joint learning, attention, and supervised attention, respectively.
Proposed Models ::: CNN Based Joint Learning Models
In Figure FIGREF6, the first proposed structure consists of two layers of CNN modules.
J-CNN: To predict the type of key element, it is essential for the CNN model to capture the context information around each word. Therefore, each word along with its surrounding context of a fixed window size was converted into a context sequence. Assuming a window size of $2l + 1$ around the target word $w_0$, the context sequence is $(w_{-l}, w_{-l+1}, \ldots , w_0, \ldots , w_{l-1}, w_l)$, where $w_i$ ($i \in [-l,l]$) stands for the $i$th word relative to $w_0$.
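A minimal sketch of this context-sequence construction is shown below; the padding token and window size are illustrative choices, not taken from the paper.

```python
def build_context_sequences(words, l=2, pad="<pad>"):
    """Return one context sequence of length 2l+1 per word, padded at the edges."""
    padded = [pad] * l + words + [pad] * l
    return [padded[i:i + 2 * l + 1] for i in range(len(words))]

story = "a young man tried to touch her".split()
for target, context in zip(story, build_context_sequences(story, l=2)):
    print(f"{target:>6} -> {context}")
```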
Because the contexts of two consecutive words in the original text are only off by one position, it would be difficult for the CNN model to detect the difference. Therefore, the position of each word in this context sequence is crucial information for the CNN model to make correct predictions BIBREF17. That position was embedded as a $p$ dimensional vector, where $p$ is a hyperparameter. The position embeddings were learned at the training stage. Each word in the original text was then converted into a sequence of the concatenation of word and position embeddings. This sequence was fed into the CNN modules in the first layer of the model, which output the high level word representations ($h_i, i\in [0,n-1]$, where $n$ is the number of input words). Each high level word representation was then passed into a fully connected layer to predict the key element type for the word. The CNN modules in this layer share the same parameters.
We input the sequence of high level word representations ($h_i$) from the first layer into another layer of multiple CNN modules to categorize the harassment incident in each dimension (Figure FIGREF6). Inside each CNN module, the sequence of word representations was first passed through a convolution layer to generate a sequence of new feature vectors ($C =[c_0,c_1,...c_q]$). This vector sequence ($C$) was then fed into a max pooling layer, followed by a fully connected layer. Modules in this layer do not share parameters across classification tasks.
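The following condensed PyTorch sketch is our own illustration of the two-layer J-CNN idea, not the authors' implementation; a single filter size and arbitrary dimensions are assumed in place of the paper's exact hyperparameters.

```python
import torch
import torch.nn as nn

class JCNNSketch(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, pos_dim=10, window=5,
                 hidden=50, n_element_types=5, n_classes_per_task=(2, 2, 11)):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.pos_emb = nn.Embedding(window, pos_dim)        # position inside the context window
        # First layer: a shared CNN over each word's context window -> high-level word repr h_i
        self.word_cnn = nn.Conv1d(emb_dim + pos_dim, hidden, kernel_size=3, padding=1)
        self.element_fc = nn.Linear(hidden, n_element_types)   # per-word key-element tags
        # Second layer: one CNN + max pooling + FC per classification task (no parameter sharing)
        self.task_cnns = nn.ModuleList([nn.Conv1d(hidden, hidden, kernel_size=3, padding=1)
                                        for _ in n_classes_per_task])
        self.task_fcs = nn.ModuleList([nn.Linear(hidden, c) for c in n_classes_per_task])

    def forward(self, context_windows):
        # context_windows: (n_words, window) word ids, one context sequence per word
        n_words, window = context_windows.shape
        positions = torch.arange(window).expand(n_words, window)
        x = torch.cat([self.word_emb(context_windows), self.pos_emb(positions)], dim=-1)
        h = torch.relu(self.word_cnn(x.transpose(1, 2))).max(dim=2).values   # (n_words, hidden)
        element_logits = self.element_fc(h)                  # key-element prediction per word
        seq = h.transpose(0, 1).unsqueeze(0)                 # (1, hidden, n_words)
        task_logits = [fc(torch.relu(cnn(seq)).max(dim=2).values)   # max pooling over words
                       for cnn, fc in zip(self.task_cnns, self.task_fcs)]
        return element_logits, task_logits

model = JCNNSketch(vocab_size=1000)
windows = torch.randint(0, 1000, (7, 5))                     # 7 words, window size 5
element_logits, task_logits = model(windows)
print(element_logits.shape, [t.shape for t in task_logits])
```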
J-ACNN: We also experimented with attentive pooling, replacing the max pooling layer with an attention layer. The attention layer aggregates the sequence of feature vectors ($C$) by measuring the contribution of each vector to the high level representation of the harassment story.
Specifically, a fully connected layer with non-linear activation was applied to each vector $c_{i}$ to get its hidden representation $u_{i}$. The similarity of $u_{i}$ with a context vector $u_{w}$ was measured and normalized through a softmax function to obtain the importance weight $\alpha _{i}$. The final representation of the incident story, $v$, was an aggregation of all the feature vectors weighted by $\alpha _{i}$. $W_{\omega }$, $b_{\omega }$ and $u_{w}$ were learned during training.
The final representation ($v$) was passed into one fully connected layer for each classification task. We also applied different attention layers for different classifications: because the classification modules categorize the incident in different dimensions, their focuses vary. For example, to classify “time of day”, one needs to focus on time phrases, whereas more attention should be paid to the harasser when classifying “age of harasser”.
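A minimal PyTorch sketch of such an attentive pooling layer is given below; the tanh activation and dot-product similarity are common choices we assume here, and the dimensions are arbitrary.

```python
import torch
import torch.nn as nn

class AttentivePooling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)                 # plays the role of W_omega, b_omega
        self.context = nn.Parameter(torch.randn(dim))   # learned context vector u_w

    def forward(self, C):                               # C: (seq_len, dim) feature vectors c_i
        u = torch.tanh(self.proj(C))                    # hidden representations u_i
        alpha = torch.softmax(u @ self.context, dim=0)  # importance weights alpha_i
        v = (alpha.unsqueeze(-1) * C).sum(dim=0)        # weighted aggregation -> story vector v
        return v, alpha

pool = AttentivePooling(dim=50)
v, alpha = pool(torch.randn(12, 50))
print(v.shape, alpha.shape)                             # torch.Size([50]) torch.Size([12])
```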
J-SACNN: To further exploit the information of the key elements, we applied supervision BIBREF18 to the attentive pooling layer, with the annotated key element types of the words as ground truth. For instance, in classification of “age of harasser”, the ground truth attention labels for words with key element types of “harasser” are 1 and others are 0. To conform to the CNN structure, we applied convolution to the sequence of ground truth attention labels, with the same window size ($w$) that was applied to the word sequence (Eq. DISPLAY_FORM11).
where $\circ $ is element-wise multiplication, $e_t$ is the ground truth attention label, and $W \in R^{w\times 1}$ is a constant matrix with all elements equal to 1. $\alpha ^{*}$ was normalized through a softmax function and used as the ground truth weight values of the vector sequence ($C$) output from the convolution layer. The loss was calculated between the learned attention $\alpha $ and $\alpha ^{*}$ (Eq. DISPLAY_FORM12), and added to the total loss.
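The sketch below illustrates one plausible reading of this supervised attention target construction: the ground-truth labels are convolved with an all-ones window and softmax-normalized into $\alpha ^{*}$. Since the exact attention loss of Eq. DISPLAY_FORM12 is not reproduced in the text, a mean-squared-error penalty is used here purely as a placeholder assumption.

```python
import torch
import torch.nn.functional as F

def supervised_attention_target(e, window=3):
    # e: (seq_len,) ground-truth labels, 1.0 at words of the relevant key element type
    ones = torch.ones(1, 1, window)                       # all-ones convolution kernel
    smoothed = F.conv1d(e.view(1, 1, -1), ones, padding=window // 2).view(-1)
    return torch.softmax(smoothed, dim=0)                 # alpha*

e = torch.tensor([0., 0., 1., 1., 0., 0., 0.])            # e.g. "young man" tagged as harasser
alpha_star = supervised_attention_target(e)
alpha = torch.softmax(torch.randn(7), dim=0)              # learned attention weights
attention_loss = F.mse_loss(alpha, alpha_star)            # placeholder for Eq. DISPLAY_FORM12
print(alpha_star, attention_loss.item())
```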
Proposed Models ::: BiLSTM Based Joint Learning Models
J-BiLSTM: The model fed the sequence of word embeddings into the BiLSTM layer. To extract key elements, the hidden states from the forward and backward LSTM cells were concatenated and used as word representations to predict the key element types.
To classify the harassment story in different dimensions, the concatenation of the forward and backward final states of the BiLSTM layer was used as the document level representation of the story.
J-ABiLSTM: We also experimented with a BiLSTM model with an attention layer to aggregate the outputs of the BiLSTM layer (Figure FIGREF7). The aggregation of the outputs was used as the document level representation.
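Below is a rough PyTorch sketch of this kind of joint BiLSTM model (our illustration, not the authors' code): a BiLSTM over word embeddings, per-word key-element tagging from the concatenated hidden states, and an attention-pooled document representation feeding the classification heads. All sizes are illustrative.

```python
import torch
import torch.nn as nn

class JABiLSTMSketch(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=50,
                 n_element_types=5, n_classes_per_task=(2, 2, 11)):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.element_fc = nn.Linear(2 * hidden, n_element_types)
        self.att_proj = nn.Linear(2 * hidden, 2 * hidden)
        self.att_context = nn.Parameter(torch.randn(2 * hidden))
        self.task_fcs = nn.ModuleList([nn.Linear(2 * hidden, c) for c in n_classes_per_task])

    def forward(self, tokens):                        # tokens: (batch, seq_len)
        h, _ = self.bilstm(self.emb(tokens))          # (batch, seq_len, 2*hidden)
        element_logits = self.element_fc(h)           # per-word key-element logits
        u = torch.tanh(self.att_proj(h))
        alpha = torch.softmax(u @ self.att_context, dim=1)   # (batch, seq_len)
        doc = (alpha.unsqueeze(-1) * h).sum(dim=1)    # attention-pooled story representation
        return element_logits, [fc(doc) for fc in self.task_fcs]

model = JABiLSTMSketch(vocab_size=1000)
element_logits, task_logits = model(torch.randint(0, 1000, (2, 20)))
print(element_logits.shape, [t.shape for t in task_logits])
```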
J-SABiLSTM: Similarly, we experimented with the supervised attention.
In all the models, a softmax function was used to calculate the probabilities at the prediction step, and the cross entropy losses from the extraction and classification tasks were added together. In the case of supervised attention, the loss defined in Eq. DISPLAY_FORM12 was added to the total loss as well. We applied the stochastic gradient descent algorithm with mini-batches and the AdaDelta update rule (rho=0.95 and epsilon=1e-6) BIBREF19, BIBREF20. The gradients were computed using back-propagation. During training, we also optimized the word and position embeddings.
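The optimization setup can be sketched as follows with a toy stand-in model (assumed for illustration): the extraction and classification cross-entropy losses are summed and minimized with Adadelta (rho=0.95, epsilon=1e-6).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyJointModel(nn.Module):
    """Toy stand-in with one extraction head and one classification head."""
    def __init__(self, vocab_size=1000, dim=32, n_tags=5, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.tag_fc = nn.Linear(dim, n_tags)       # per-word key-element tags
        self.cls_fc = nn.Linear(dim, n_classes)    # one story-level classification task

    def forward(self, tokens):
        h = self.emb(tokens)
        return self.tag_fc(h), self.cls_fc(h.mean(dim=1))

model = ToyJointModel()
optimizer = torch.optim.Adadelta(model.parameters(), rho=0.95, eps=1e-6)

tokens = torch.randint(0, 1000, (2, 20))
tags = torch.randint(0, 5, (2, 20))                # per-word key-element labels
labels = torch.randint(0, 2, (2,))                 # story-level labels

tag_logits, cls_logits = model(tokens)
loss = F.cross_entropy(tag_logits.reshape(-1, 5), tags.reshape(-1)) \
     + F.cross_entropy(cls_logits, labels)         # extraction + classification losses summed
loss.backward()
optimizer.step()
optimizer.zero_grad()
```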
Experiments and Results ::: Experimental Settings
Data Splits: We used the same splits of train, development, and test sets used by Karlekar and Bansal BIBREF6, with 7201, 990 and 1701 stories, respectively. In this study, we only considered single label classifications.
Baseline Models: CNN and BiLSTM models that perform classification and extraction separately were used as baseline models. In classification, we also experimented with BiLSTM with the attention layer. To demonstrate that the improvement came from the joint learning structure rather than the two-layer structure in J-CNN, we investigated the same model structure without training on key element extraction; we use J-CNN* to denote it.
Preprocess: All the texts were converted to lowercase and preprocessed by removing non-alphanumeric characters, excluding “. ! ?”. The word embeddings were pre-trained using fastText BIBREF21 with dimension 100.
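A small sketch of this preprocessing plus embedding pre-training might look as follows; the regular expression, file name, and use of the official fasttext package (with minCount lowered for the tiny toy corpus) are our assumptions.

```python
import re
import fasttext

def preprocess(text):
    """Lowercase and drop non-alphanumeric characters except . ! ? (per the setup above)."""
    return re.sub(r"[^a-z0-9.!? ]+", " ", text.lower())

stories = [
    "A man came near to her tried to be physical with her .",
    "My friend was standing in the queue to pay bill and was ogled by a group of boys.",
    "I was followed on my way home .",
]
with open("stories_clean.txt", "w") as f:
    for s in stories:
        f.write(preprocess(s) + "\n")

embeddings = fasttext.train_unsupervised("stories_clean.txt", dim=100, minCount=1)
print(embeddings.get_word_vector("man").shape)     # (100,)
```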
Hyperparameters: For the CNN model, the filter sizes were chosen to be (1,2,3,4), with 50 filters per filter size. Batch size was set to 50 and the dropout rate was 0.5. The BiLSTM model comprises two layers of one-directional LSTMs. Every LSTM cell has 50 hidden units. The dropout rate was 0.25. Attention size was 50.
Experiments and Results ::: Results and Discussions
We compared the joint learning models with the single task models. Results are averages over five experiments. Although not much improvement was achieved in key element extraction (Table TABREF16), classification performance improved significantly with the joint learning schemes (Table TABREF17). Significance t-test results are shown in Table 2 in the supplementary file.
BiLSTM Based Models: Joint learning BiLSTM with attention outperformed the single task BiLSTM models. One reason is that it directed the attention of the model to the correct part of the text. For example:
S1–S3 visualize the attention weights of different models over the same story: “when i was returning my home after finishing my class . i was in queue to get on the micro bus and there was a girl opposite to me just then a young man tried to touch her on the breast .” In S1, the weights are spread over the span “a young man tried to touch her on the breast”; in S2, nearly all of the weight falls on “young man”; in S3, nearly all of the weight falls on “micro bus”.
In S1, the regular BiLSTM with attention model for classification of “age of harasser” put some attention on phrases other than the harasser, and hence aggregated noise. This could explain why the regular BiLSTM model got lower performance than the CNN model. However, when trained jointly with key element extraction, it put almost all attention on the harasser “young man” (S2), which helped the model make the correct prediction of “young harasser”. When predicting the “type of location” (S3), the joint learning model directed its attention to “micro bus”.
CNN Based Models: Since CNN is efficient at capturing the most useful information BIBREF22, it is quite suitable for the classification tasks in this study. It achieved better performance than the BiLSTM model. The joint learning method boosted the performance even higher. This is because the classifications are related to the extracted key elements, and the word representations learned by the first layer of CNNs (Figure FIGREF6) are more informative than the word embeddings. By plotting t-SNE projections BIBREF23 of the two kinds of word vectors, we can see that the word representations in the joint learning model made the words more separable (Figure 1 in the supplementary file). In addition, no improvement was found with the J-CNN* model, which demonstrates that joint learning with extraction is essential for the improvement.
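Such a t-SNE inspection can be reproduced in a few lines with scikit-learn; the random matrix below merely stands in for the learned word vectors.

```python
import numpy as np
from sklearn.manifold import TSNE

word_vectors = np.random.rand(200, 100)       # placeholder for learned word representations
projected = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(word_vectors)
print(projected.shape)                        # (200, 2)
```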
With supervised attentive pooling, the model can get additional knowledge from the key element labels. It helped the model in cases where certain location phrases were mentioned but the incidents did not happen at those locations. For instance, for “I was followed on my way home .”, max pooling will very likely predict “private places”, but the location is actually unknown. In other cases, with supervised attentive pooling, the model can distinguish “metro” and “metro station”, which are “transportation” and “stop/station” respectively. Therefore, the model further improved classification of “type of location” with supervised attention in terms of macro F1. For some tasks, like “time of day”, there are fewer cases with such disambiguation and hence max pooling worked well. Supervised attention improved macro F1 in location and harasser classifications, because it made more correct predictions in cases that mentioned the location and harasser; however, the majority of stories did not mention them. Therefore, the accuracy of J-SACNN did not increase compared with the other models.
Classification on Harassment Forms: In Table TABREF18, we also compared the performance of binary classifications on harassment forms with the results reported by Karlekar and Bansal karlekar2018safecity. Joint learning models achieved higher accuracy. In some harassment stories, the whole text or a span of the text consists of trigger words of multiple forms, such as “stare, whistles, start to sing, commenting”. The supervised attention mechanism forces the model to look at all such words rather than just the one related to the harassment form being classified, and hence it can introduce noise. This can explain why J-SACNN got lower accuracy than J-ACNN on two of the harassment form classifications. In addition, the J-CNN model performed best in the “ogling” classification.
Patterns of Sexual Harassment
We plotted the distribution of harassment incidents in each categorization dimension (Figure FIGREF19). It displays statistics that provide important evidence as to the scale of harassment and that can serve as the basis for more effective interventions to be developed by authorities ranging from advocacy organizations to policy makers. It provides evidence to support some commonly assumed factors about harassment: First, we demonstrate that harassment occurred more frequently during the night time than the day time. Second, it shows that besides unspecified strangers (not shown in the figure), conductors and drivers top the list of identified types of harassers, followed by friends and relatives.
Furthermore, we uncovered strong correlations between the age of perpetrators and the location of harassment, between single/multiple harasser(s) and location, and between age and single/multiple harasser(s) (Figure FIGREF20). The significance of these correlations was tested with chi-square tests of independence (p value less than 0.05). Identifying these patterns will enable interventions to be differentiated for and targeted at specific populations. For instance, young harassers often engage in harassment activities as groups. This points to the influence of peer pressure and masculine behavioral norms for men and boys on these activities. We also found that the majority of young perpetrators engaged in harassment behaviors on the streets. These findings suggest that interventions with young men and boys, who are readily influenced by peers, might be most effective when education is done peer-to-peer. It also points to the locations where such efforts could be made, including both in schools and on the streets. In contrast, we found that adult perpetrators of sexual harassment are more likely to act alone. Most of the adult harassers engaged in harassment on public transportation. These differences in adult harassment activities and locations mean that interventions should be responsive to these factors, for example, by increasing security measures on transit at key times and locations.
In addition, we found correlations between the forms of harassment and the age, single/multiple harasser(s), type of harasser, and location (Figure FIGREF21). For example, young harassers are more likely to engage in verbal harassment, rather than physical harassment, as compared to adults. Touching or groping was more often carried out by a single perpetrator than by a group, whereas commenting happened more frequently when harassers were in groups. Last but not least, public transportation is where people got indecently touched most frequently, both by fellow passengers and by conductors and drivers. The nature and location of the harassment are particularly significant in developing strategies for those who are harassed or who witness the harassment to respond and manage the everyday threat of harassment. For example, some strategies will work best on public transport, a particular closed, shared space setting, while other strategies might be more effective on the open space of the street.
These results can provide valuable information for all members of the public. Sharing stories of harassment has been found by researchers to shift people’s cognitive and emotional orientation towards their traumatic experiences BIBREF24. Greater awareness of the patterns and scale of harassment experiences promises to assure those who have been subjected to this violence that they are not alone, empowering others to report incidents, and ensuring them that efforts are being made to prevent others from experiencing the same harassment. These results also provide various authorities with tools to identify potential harassment patterns and to make more effective interventions to prevent further harassment incidents. For instance, the authorities can increase targeted educational efforts at youth and adults, and be guided in utilizing limited resources most effectively to offer more safety measures, including policing and community-based responses, for example, by focusing efforts on highly populated public transportation during the nighttime, when harassment is found to be most likely to occur.
Conclusions
We provided a large number of annotated personal stories of sexual harassment. Analyzing and identifying the social patterns of harassment behavior is essential to changing these patterns and social tolerance for them. We demonstrated joint learning NLP models with strong performance that automatically extract key elements and categorize the stories. Potentially, the approaches and models proposed in this study can be applied to sexual harassment stories from other sources, to process and summarize the harassment stories and help those who have experienced harassment, as well as authorities, to work faster, such as by automatically filing reports BIBREF6. Furthermore, we discovered meaningful patterns in the situations where harassment commonly occurred. The volume of social media data is huge, and the more we can extract from these data, the more powerful we can be as part of the efforts to build safer and more inclusive communities. Our work can increase the understanding of sexual harassment in society, ease the processing of such incidents by advocates and officials, and most importantly, raise awareness of this urgent problem.
Acknowledgments
We thank Safecity for granting permission to use the data. | joint learning NLP models that use convolutional neural network (CNN) BIBREF8 and bi-directional long short-term memory (BiLSTM) |