{
"paper_id": "D19-1018",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:06:57.840436Z"
},
"title": "Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects",
"authors": [
{
"first": "Jianmo",
"middle": [],
"last": "Ni",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"settlement": "San Diego"
}
},
"email": ""
},
{
"first": "Jiacheng",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"settlement": "San Diego"
}
},
"email": "j9li@ucsd.edu"
},
{
"first": "Julian",
"middle": [],
"last": "McAuley",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"settlement": "San Diego"
}
},
"email": "jmcauley@ucsd.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Several recent works have considered the problem of generating reviews (or 'tips') as a form of explanation as to why a recommendation might match a user's interests. While promising, we demonstrate that existing approaches struggle (in terms of both quality and content) to generate justifications that are relevant to users' decision-making process. We seek to introduce new datasets and methods to address this recommendation justification task. In terms of data, we first propose an 'extractive' approach to identify review segments which justify users' intentions; this approach is then used to distantly label massive review corpora and construct large-scale personalized recommendation justification datasets. In terms of generation, we design two personalized generation models with this data: (1) a reference-based Seq2Seq model with aspect-planning which can generate justifications covering different aspects, and (2) an aspect-conditional masked language model which can generate diverse justifications based on templates extracted from justification histories. We conduct experiments on two real-world datasets which show that our model is capable of generating convincing and diverse justifications.",
"pdf_parse": {
"paper_id": "D19-1018",
"_pdf_hash": "",
"abstract": [
{
"text": "Several recent works have considered the problem of generating reviews (or 'tips') as a form of explanation as to why a recommendation might match a user's interests. While promising, we demonstrate that existing approaches struggle (in terms of both quality and content) to generate justifications that are relevant to users' decision-making process. We seek to introduce new datasets and methods to address this recommendation justification task. In terms of data, we first propose an 'extractive' approach to identify review segments which justify users' intentions; this approach is then used to distantly label massive review corpora and construct large-scale personalized recommendation justification datasets. In terms of generation, we design two personalized generation models with this data: (1) a reference-based Seq2Seq model with aspect-planning which can generate justifications covering different aspects, and (2) an aspect-conditional masked language model which can generate diverse justifications based on templates extracted from justification histories. We conduct experiments on two real-world datasets which show that our model is capable of generating convincing and diverse justifications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Explaining, or justifying, recommendations to users has the potential to increase their transparency and reliability. However providing meaningful interpretations remains a difficult task, partly due to the black-box nature of many recommendation models, but also because we simply lack ground-truth datasets specifying what 'good' justifications ought to look like.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous work has sought to learn user preferences and writing styles from crowd-sourced reviews (Dong et al., 2017; Ni and McAuley, 2018) Review examples: I love this little stand! The coconut mocha chiller and caramel macchiato are delicious. Wow what a special find. One of the most unique and special date nights my husband and I have had.",
"cite_spans": [
{
"start": 97,
"end": 116,
"text": "(Dong et al., 2017;",
"ref_id": "BIBREF9"
},
{
"start": 117,
"end": 138,
"text": "Ni and McAuley, 2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Tip examples: Great food. Nice ambiance. Gnocchi were very good. I can't get enough of this place.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The food portions were huge. Plain cheese quesadilla is very good and very cheap. Table 1 : In contrast to reviews and tips, we seek to automatically generate recommendation justifications that are more concise, concrete, and helpful for decision making. Examples of justifications from reviews, tips, and our annotated dataset are marked in bold.",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 89,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Justification examples:",
"sec_num": null
},
{
"text": "to generate explanations in the form of natural language, e.g. generating synthesized reviews similar to those that users would write about a product. However, a large portion of review text (or text from 'tips') is often of little relevance to most users' decision making (e.g. they describe verbose experiences or general endorsements) and may not be appropriate to use as explanations in terms of content and language style. As a result, existing models that learn directly from reviews (or tips) may not capture crucial information that explains users' purchases. Table 1 shows examples of reviews, tips and ideal justifications. More recently, there has been work studying the task of tip generation where tips are concise summaries of reviews (Li et al., 2017) . Though tips are concise and some subset of them might be suitable as candidates for recommendation justifications, only a few e-commerce systems provide tips accompanied by reviews. Even in systems where tips are available, the number of tips is usually far smaller than the number of reviews. These approaches hence suffer from generalizability issues, especially in settings where user interactions are highly sparse.",
"cite_spans": [
{
"start": 749,
"end": 766,
"text": "(Li et al., 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 568,
"end": 575,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Justification examples:",
"sec_num": null
},
{
"text": "On the other hand, generating diverse responses is essential in personalized content generation scenarios such as justification generation. Instead of always predicting the most popular reasons, it's preferable to present diverse justifications for different users based on their personal interests. Recent work has shown that incorporating prior knowledge into generation frameworks can greatly improve diversity. Prior knowledge could include story-lines in story generation (Yao et al., 2019) , or historical responses in dialogue systems (Weston et al., 2018) .",
"cite_spans": [
{
"start": 477,
"end": 495,
"text": "(Yao et al., 2019)",
"ref_id": "BIBREF32"
},
{
"start": 542,
"end": 563,
"text": "(Weston et al., 2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Justification examples:",
"sec_num": null
},
{
"text": "In this work, our goal is to generate convincing and diverse justifications. To address the lack of ground-truth data about 'good' justifications, we propose a pipeline that can identify justifications from massive corpora of reviews or tips. We extract fine-grained aspects from justifications and build user personas and item profiles consisting of sets of representative aspects. To improve generation quality and diversity, we propose two generation models: (1) a reference-based Seq2Seq model with aspect-planning, which takes previous justifications as a reference and can produce justifications based on different aspects, and (2) an aspect-conditional masked language model that can generate diverse justifications from templates extracted from previous justifications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Justification examples:",
"sec_num": null
},
{
"text": "Our contributions are threefold:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Justification examples:",
"sec_num": null
},
{
"text": "\u2022 To facilitate recommendation justification generation, we propose a pipeline to identify justification candidates and build aspect-based user personas and item profiles from massive corpora of reviews. With this approach, we are able to build large-scale personalized justification datasets. We use these extractive justification segments in the task of explainable recommendation and show that these are better training sources than whole reviews. \u2022 We propose two models based on reference attention, aspect-planning techniques, and a persona-conditional masked language model. We show that adding such personalized information enables the models to generate justifications with high quality and diversity. \u2022 We conduct extensive experiments on two real-world datasets from Yelp and Amazon Clothing. We provide an annotated dataset about 'good' justifications on the Yelp dataset and show that the binary classifier trained on this dataset generalizes well to the Amazon Clothing dataset. We study different decoding strategies and compare their effect on generation performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Justification examples:",
"sec_num": null
},
{
"text": "In this section, we introduce the pipeline to extract high quality justifications from raw user reviews. Specifically, our goal here is to identify review segments that can be used as justifications and build a personalized justification dataset upon them. Our pipeline consists of three steps: 1. Annotating a set of review segments with binary labels, i.e., to determine whether they are 'good' or 'bad' justifications. 2. Training a classifier on the annotated subset and applying it to distantly label all the review segments to extract 'good' justifications for each user and item pair. 3. Applying fine-grained aspect extraction for the extracted justifications, and building user personas and item profiles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Generation",
"sec_num": "2"
},
{
"text": "The first step is to extract text segments from reviews that are appropriate to use as justifications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Justifications From Reviews",
"sec_num": "2.1"
},
{
"text": "Instead of a complete sentence or phrase, we define each segment as an Elementary Discourse Unit (EDU; (Mann and Thompson, 1988) ) which corresponds to a sequence of clauses. We use the model of to obtain EDUs from reviews. Recent works have shown that EDUs can improve the performance of document-level summarization (Bhatia et al., 2015) and opinion summarization (Angelidis and Lapata, 2018) . After preprocessing the reviews into EDUs, we analyzed the linguistic differences between recommendation justifications and reviews, and built two rules to filter the segments that are unlikely to be suitable justifications: (1) segments with first-person or third-person pronouns, and (2) segments that are too long or too short. Next, two expert annotators were exposed to 1,000 segments among those not filtered out and asked to determine whether they are 'good' justifications. Labeling was performed iteratively, followed by feedback and discussion, until the quality was aligned between the two annotators. At the end of the process, the inter-annotator agreement for the binary labeling task (good vs. bad), measured by Cohen's kappa (Cohen, 1960) , was 0.927 after alignment. Then, the annotators further labeled 600 segments. Overall, 24.8% of the segments were labeled good.",
"cite_spans": [
{
"start": 97,
"end": 102,
"text": "(EDU;",
"ref_id": null
},
{
"start": 103,
"end": 128,
"text": "(Mann and Thompson, 1988)",
"ref_id": "BIBREF20"
},
{
"start": 318,
"end": 339,
"text": "(Bhatia et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 366,
"end": 394,
"text": "(Angelidis and Lapata, 2018)",
"ref_id": "BIBREF1"
},
{
"start": 1101,
"end": 1128,
"text": "Cohen's kappa (Cohen, 1960)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Justifications From Reviews",
"sec_num": "2.1"
},
{
"text": "Our next step is to propagate labels to the complete review corpus. Here we adopt BERT (Devlin et al., 2019) and fine-tune it on our classification task, where a [CLS] token is added to the beginning of each segment and the final hidden state (i.e., output of BERT) corresponding to this token is fed into a linear layer to obtain the binary prediction. Cross entropy is used as the training loss. We split the annotated dataset into Train, Dev, and Test sets with a 0.8/0.1/0.1 ratio, fine-tune the BERT classifier on the Train set, and choose the best model on the Dev set. After three epochs of fine-tuning, BERT can achieve an F1-score of 0.80 on the Test set. We compare the performance of BERT with multiple baseline models: (1) an XGBoost model which uses Bag-of-Words sentence features; (2) a convolutional neural network (CNN) with three convolution layers and one linear layer; (3) a long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) network with a max-pooling layer and a linear layer; and (4) a BERT sentiment classifier (BERT-SA) trained on the complete Yelp dataset for one epoch and three epochs. To obtain the pre-trained word embeddings for the CNN and LSTM models, we applied fastText (Bojanowski et al., 2016) on the Yelp Review dataset. We set the embedding dimension to 200 and used default values for other hyper-parameters. Table 2 presents results for our binary classification task. The BERT classifier has a higher F1-score and precision than the other classifiers. The BERT-SA model after three epochs only achieves an F1-score of 0.491, which confirms the difference between sentiment analysis and our good/bad task, i.e., even if a segment has positive sentiment, it might not be suitable as a justification.",
"cite_spans": [
{
"start": 87,
"end": 108,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 158,
"end": 163,
"text": "[CLS]",
"ref_id": null
},
{
"start": 920,
"end": 954,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF12"
},
{
"start": 1210,
"end": 1235,
"text": "(Bojanowski et al., 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 1354,
"end": 1361,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Automatic Classification",
"sec_num": "2.2"
},
{
"text": "The Tuna is pretty amazing Appetizers and pasta are excellent here An excellent selection of both sweet and savory crepes It was filled with delicious food, fantastic music and dancing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Yelp",
"sec_num": null
},
{
"text": "The quality of the material is great Great shirt, especially for the price. The seams and stitching are really nice Fit the bill for a Halloween costume. Table 3 : Examples of justifications with fine-grained aspects in our annotated dataset. The fine-grained aspects are italic and underlined.",
"cite_spans": [],
"ref_spans": [
{
"start": 154,
"end": 161,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Amazon-Cloth",
"sec_num": null
},
{
"text": "Finally, we extract the fine-grained aspects that each justification covers. Fine-grained aspects are properties of products that appear among a user's opinions. We adopt the method proposed by Zhang et al. (2014) to build a sentiment lexicon which includes a set of fine-grained aspects from the whole dataset. We then use simple rules to determine which aspects appear in each justification. 1 Table 3 presents a set of examples from our dataset. Each example consists of a justification that a user has written about an item, and multiple fine-grained aspects mentioned in the justification. Note that we only annotated the Yelp dataset, trained a classifier on it, and applied the model to both the Yelp and Amazon Clothing datasets. As shown in Table 3 , the trained classifier works well on both datasets.",
"cite_spans": [
{
"start": 194,
"end": 213,
"text": "Zhang et al. (2014)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 396,
"end": 403,
"text": "Table 3",
"ref_id": null
},
{
"start": 745,
"end": 752,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Fine-grained Aspect Extraction",
"sec_num": "2.3"
},
{
"text": "For each user u (or item i), we build a justification reference D = {d 1 , . . . , d lr } consisting of justifications that the user has written (or justifications about the item) on the training set, where l r is the maximum number of justifications. We also obtain a user persona (or item profile) A = {a 1 , . . . , a K } based on the fine-grained aspects that the user's (or item's) previous justifications have covered, where K is the maximum number of aspects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},
{
"text": "Given a user u and an item i, as well as their justification references D u and D i , and u's persona A u and i's profile A i , our target is to predict the justification J u,i = {w 1 , w 2 , . . . , w T } that would explain why item i fits user u's interests, where T is the length of the justification. (Figure 1 : Structure of the reference-based Seq2Seq model with aspect-planning.)",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 168,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},
{
"text": "Our base model follows the structure of a standard Seq2Seq (Sutskever et al., 2014) model. Our framework, called 'Ref2Seq', views the historical justifications of users and items as references and learns latent personalized features from them. Figure 1 shows the structure of our Reference-based Seq2Seq Model. It includes two components: (1) two sequence encoders that learn user and item latent representations by taking previous justifications as references; (2) a sequence decoder incorporating representations from users and items to generate personalized justifications.",
"cite_spans": [
{
"start": 59,
"end": 83,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 244,
"end": 250,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Reference-based Seq2Seq Model",
"sec_num": "3.2"
},
{
"text": "Sequence Encoders. Our user encoder and item encoder share the same structure, which includes an embedding layer, a two-layer bidirectional GRU (Cho et al., 2014) , and a projection layer. The input is a user (or item) reference D consisting of a set of historical justifications. These justifications pass through a word embedding layer, then go through the GRU and yield a sequence of hidden states e \u2208 R ls\u00d7lr\u00d7n :",
"cite_spans": [
{
"start": 148,
"end": 166,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-based Seq2Seq Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E = Embedding(D), e = GRU(E) = \u2192 e + \u2190 e ,",
"eq_num": "(1)"
}
],
"section": "Reference-based Seq2Seq Model",
"sec_num": "3.2"
},
{
"text": "where l s denotes the length of the sequence, n is the hidden size of the encoder GRU, E \u2208 R ls\u00d7lr\u00d7n is the embedded sequence representation, and \u2192 e and \u2190 e are the hidden vectors produced by a forward and a backward GRU (respectively).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-based Seq2Seq Model",
"sec_num": "3.2"
},
{
"text": "To combine information from different 'references' (i.e. justifications), the hidden states are then projected via a linear layer:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-based Seq2Seq Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e = W e \u2022 e + b e ,",
"eq_num": "(2)"
}
],
"section": "Reference-based Seq2Seq Model",
"sec_num": "3.2"
},
{
"text": "where \u00ea \u2208 R ls\u00d7n is the final output of the encoder, and W e \u2208 R lr , b e \u2208 R are learned parameters. Sequence decoder. The decoder is a two-layer GRU that predicts the target words given a start token. The hidden state of the decoder is initialized using the sum of the last hidden states of the user and item encoders. The hidden state at time-step t is updated via the GRU unit based on the previous hidden state and the input word. Specifically:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-based Seq2Seq Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h 0 = e u ls + e i ls , h t = GRU(w t , h t\u22121 ),",
"eq_num": "(3)"
}
],
"section": "Reference-based Seq2Seq Model",
"sec_num": "3.2"
},
{
"text": "where e u ls and e i ls are the last hidden states of the user and item encoder outputs \u00ea u and \u00ea i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-based Seq2Seq Model",
"sec_num": "3.2"
},
{
"text": "To explore the relation between the reference and generation, we apply an attention fusion layer to summarize the output of each encoder. For the user and item reference encoder, the attention vector is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-based Seq2Seq Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a 1 t = ls j=1 \u03b1 1 tj e j , \u03b1 1 tj = exp(tanh(v 1 \u03b1 (W 1 \u03b1 [e j ; h t ] + b 1 \u03b1 )))/Z,",
"eq_num": "(4)"
}
],
"section": "Reference-based Seq2Seq Model",
"sec_num": "3.2"
},
{
"text": "where a 1 t \u2208 R n is an attention vector on the sequence encoder at time-step t, \u03b1 1 tj is an attention score over the encoder hidden state e j and decoder hidden state h t , and Z is a normalization term.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-based Seq2Seq Model",
"sec_num": "3.2"
},
{
"text": "Aspect-Planning Generation. One of the challenges for generating justifications is how to improve controllability, i.e., directly manipulate the content being generated. Inspired by 'plan-andwrite' (Yao et al., 2019) , we extend the base model to an Aspect-Planning Ref2Seq (AP-Ref2Seq) model where we plan a fine-grained aspect before generation. This aspect planning can be considered as an extra form of supervision instead of a hard constraint to make justification generation more controllable.",
"cite_spans": [
{
"start": 198,
"end": 216,
"text": "(Yao et al., 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-based Seq2Seq Model",
"sec_num": "3.2"
},
{
"text": "When generating the justification for user u and item i, we first provide a fine-grained aspect a as a plan. The aspect a is fed into the word embedding layer to obtain the aspect embedding E a . Then, we compute the scores between the embedding of the aspect and the decoder hidden state as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-based Seq2Seq Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a 2 t = \u03b1 2 t E a , \u03b1 2 t = exp(tanh(v 2 \u03b1 (W 2 \u03b1 [E a ; h t ] + b 2 \u03b1 )))/Z,",
"eq_num": "(5)"
}
],
"section": "Reference-based Seq2Seq Model",
"sec_num": "3.2"
},
{
"text": "where a 2 t \u2208 R n is an attention vector and \u03b1 2 t is an attention score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-based Seq2Seq Model",
"sec_num": "3.2"
},
{
"text": "The attention vectors a 1 ut of user u, a 1 it of item i, and a 2 t of fine-grained aspect a, are concatenated with the decoder hidden state at time-step t and projected to obtain the output word distribution P . The output probability for word w at time-step t is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-based Seq2Seq Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(w t ) = tanh(W 1 [h t ; a 1 ut ; a 1 it ; a 2 t ] + b 1 ),",
"eq_num": "(6)"
}
],
"section": "Reference-based Seq2Seq Model",
"sec_num": "3.2"
},
{
"text": "where w t is the target word at time-step t. Given the probability p(w t ) at each time step t, the model is trained using a cross-entropy loss compared against the ground-truth sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference-based Seq2Seq Model",
"sec_num": "3.2"
},
{
"text": "Though Seq2Seq-based models can achieve high quality output, they often fail to generate diverse content. Recent works in natural language generation (NLG) tried to combine generation methods with information retrieval techniques to increase the generation diversity Baheti et al., 2018) . The basic idea follows the paradigm of retrieve-and-edit-which is to first retrieve historical responses as templates, and then edit the template into new content. Since our data is annotated with fine-grained aspects, it naturally fits into this type of retrieve-and-edit paradigm. Meanwhile, masked language models have shown great performance in language modeling. Recent work Mansimov et al., 2019) has shown that by sampling from the masked language model (e.g. BERT), it is able to generate coherent sentences.",
"cite_spans": [
{
"start": 267,
"end": 287,
"text": "Baheti et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 670,
"end": 692,
"text": "Mansimov et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect Conditional Masked Language Model",
"sec_num": "3.3"
},
{
"text": "Inspired by this work, we want to extend such an approach into a conditional version: we explore the use of an Aspect Conditional Masked Language Model (ACMLM) to generate diverse personalized justifications. Figure 2 shows the structure of our Aspect Conditional Masked Language Model. For a justification J u,i that user u wrote about item i, we adapt the pre-trained BERT model (Devlin et al., 2019) into an encoder-decoder network with (1) an aspect encoder which encodes the user persona and item profile into latent representations and (2) a masked language model sequence decoder that takes in a masked justification and predicts the masked tokens.",
"cite_spans": [
{
"start": 380,
"end": 401,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 208,
"end": 216,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Aspect Conditional Masked Language Model",
"sec_num": "3.3"
},
{
"text": "Aspect Encoder. Our aspect encoder shares the same WordPiece embeddings (Wu et al., 2016) as BERT. The encoder feeds the intersection of fine-grained aspects from the user persona and item profile A ui = {a 1 , . . . , a K } into the embedding layer and obtains the aspect embedding A ui \u2208 R K \u00d7n , where K is the number of common fine-grained aspects and n is the dimension of the WordPiece embeddings.",
"cite_spans": [
{
"start": 72,
"end": 89,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect Conditional Masked Language Model",
"sec_num": "3.3"
},
{
"text": "Masked Language Model Sequence Decoder. We use the masked language model in the pre-trained BERT model as our sequence decoder and add attention over the aspect encoder's output. As shown in Figure 2 , the input to the decoder is a masked justification J M u,i = {w 1 , . . . , w T } with multiple tokens replaced by [MASK] . The decoder's output T \u2208 R T \u00d7n is then fed to the attention layer to calculate an attention score with the output of the encoder:",
"cite_spans": [
{
"start": 319,
"end": 325,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 190,
"end": 198,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Aspect Conditional Masked Language Model",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a 3 t = K j=1 \u03b1 3 tj A j , \u03b1 3 tj = exp(tanh(v 3 \u03b1 (W 3 \u03b1 [A j ; T t ] + b 3 \u03b1 )))/Z.",
"eq_num": "(7)"
}
],
"section": "Aspect Conditional Masked Language Model",
"sec_num": "3.3"
},
{
"text": "The attention vector a 3 t is then concatenated with the decoder hidden state at time-step t and sent to a linear projection layer to obtain the output word distribution P . The output probability for word w at time-step t is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect Conditional Masked Language Model",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(w t ) = tanh(W 2 [T t ; a 3 t ] + b 2 )",
"eq_num": "(8)"
}
],
"section": "Aspect Conditional Masked Language Model",
"sec_num": "3.3"
},
{
"text": "where w t is the target word at time-step t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect Conditional Masked Language Model",
"sec_num": "3.3"
},
{
"text": "Masking Procedure. The original BERT paper applies a flat rate (15%) to decide whether to mask a token. Unlike their approach, we adopt a higher rate to mask fine-grained aspects since they are more important in justifications. Specifically, if we encounter a fine-grained aspect, we will replace it with a [MASK] token 30% of the time; while for other words, we will replace them with a [MASK] token 15% of the time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect Conditional Masked Language Model",
"sec_num": "3.3"
},
{
"text": "During training, the model will only predict those masked tokens and calculate a cross-entropy loss on them. Generation by Sampling from Masked Templates. We next discuss how to generate justifications from the trained ACMLM. We follow the sampling strategy of to generate justifications. Instead of generating from a sequence of all [MASK] tokens, we start with masked templates generated from historical justifications about the target item. These masked templates include prior knowledge about the item and can increase the speed of sampling convergence. Table 4 shows an example of the generation process. We initialize the template sequence X 0 as (universe, [MASK], . . . , ##ble) with length T . At each iteration i, a position t i is sampled uniformly at random from {1, . . . , T } and the token at t i (i.e. x i t i ) of the current sequence X i is replaced by [MASK] . After that, we obtain the conditional probability of",
"cite_spans": [
{
"start": 871,
"end": 877,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 558,
"end": 565,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Aspect Conditional Masked Language Model",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x_{t_i} as p(x_{t_i} \\mid X^i_{\\setminus t_i}) = \\frac{1}{Z(X^i_{\\setminus t_i})} \\exp\\big( 1h(x_{t_i})^{\\top} f_{\\theta}(X^i_{\\setminus t_i}) \\big),",
"eq_num": "(9)"
}
],
"section": "Aspect Conditional Masked Language Model",
"sec_num": "3.3"
},
{
"text": "where 1h(x_{t_i}) is a one-hot vector with index x_{t_i} set to 1, X^i_{\\setminus t_i} is the sequence obtained by replacing the token at position t_i of X^i with [MASK], f_{\\theta}(X^i_{\\setminus t_i}) is the output of feeding X^i_{\\setminus t_i} into the ACMLM as in Equation 8, and Z is the normalization term. We then sample \\hat{x}_{t_i} from Equation (9), and construct the next sequence by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect Conditional Masked Language Model",
"sec_num": "3.3"
},
{
"text": "X^{i+1} = (x^i_1, \\ldots, \\hat{x}_{t_i}, \\ldots, x^i_T).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect Conditional Masked Language Model",
"sec_num": "3.3"
},
{
"text": "After repeating this procedure N times, the final sequence is taken as the generation output. 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect Conditional Masked Language Model",
"sec_num": "3.3"
},
{
"text": "With our proposed pipeline (Section 2), we construct two personalized justification datasets from existing review data: Yelp and Amazon Clothing. 3 4 We further filter out users with fewer than five justifications. For each user, we randomly hold out two samples from all of their justifications to construct the Dev and Test sets. Table 5 shows the statistics of our two datasets.",
"cite_spans": [
{
"start": 145,
"end": 147,
"text": "34",
"ref_id": null
}
],
"ref_spans": [
{
"start": 332,
"end": 339,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "For automatic evaluation, we consider three baselines: Item-Rand is a baseline that randomly chooses a justification from the item's historical justifications. LexRank is a strong unsupervised baseline that is widely used in text summarization (Erkan and Radev, 2004). Given all historical justifications about an item, LexRank selects one justification as the summary; we then use that as the justification for all users. Attr2Seq (Dong et al., 2017) is a Seq2Seq baseline that uses attributes (i.e. user and item identity) as input.",
"cite_spans": [
{
"start": 245,
"end": 268,
"text": "(Erkan and Radev, 2004)",
"ref_id": "BIBREF10"
},
{
"start": 437,
"end": 455,
"text": "(Dong et al., 2017",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "By default, all models use beam search during generation. Recently, there have been works showing that the output of sampling-based decoding is more diverse and better suited to high-entropy tasks (Holtzman et al., 2019). To this end, we explore another decoding strategy, 'top-k sampling' (Radford et al., 2019), and include a variant of our model: Ref2Seq (Top-k). 5 For human evaluation, we include two baselines: Ref2Seq (Review) and Ref2Seq (Tip), both of which are the same model as Ref2Seq but trained on the original review and tip data, respectively. Comparison with these two baselines demonstrates that training on our annotated dataset tends to generate text more suitable as justifications.",
"cite_spans": [
{
"start": 195,
"end": 218,
"text": "(Holtzman et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 288,
"end": 310,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "We use PyTorch 6 to implement our models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Detail",
"sec_num": "4.3"
},
{
"text": "For Ref2Seq and AP-Ref2Seq, we set the hidden size and word embedding size to 256. We apply a dropout rate of 0.5 for the encoder and 0.2 for the decoder. The size of the justification reference l_r is set to 5 and the number of fine-grained aspects K in the user persona and item profile is set to 30. We train the model using Adam with learning rate 2e-4 and stop training either when it reaches 20 epochs or when the perplexity stops improving (on the Dev set). For ACMLM, we build our model on the BERT implementation from HuggingFace. 7 We initialize our decoder with the pre-trained 'BERT-base' model and set the maximum sequence length to 30. We train the model for 5 epochs using Adam with learning rate 2e-5. For models using beam search, we set the beam size to 10. For models using 'top-k' sampling, we set k to 5. 5",
"cite_spans": [
{
"start": 543,
"end": 544,
"text": "7",
"ref_id": null
},
{
"start": 820,
"end": 821,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Detail",
"sec_num": "4.3"
},
{
"text": "5 At each time step, the next word is sampled from the top k most probable next tokens, according to their probabilities. 6 http://pytorch.org/docs/master/index.html 7 https://github.com/huggingface/pytorch-pretrained-BERT For ACMLM, we use a burn-in step equal to the length of the initial sequence. Our data and code are available online. 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Detail",
"sec_num": "4.3"
},
{
"text": "For automatic evaluation, we use BLEU, Distinct-1, and Distinct-2 (Li et al., 2015) to measure the performance of our model. As shown in Table 6, our reference-based models achieve the highest BLEU scores on both datasets, except for BLEU-3 on Yelp. This confirms that Ref2Seq is able to capture user and item content to generate the most relevant content, compared with non-personalized models such as LexRank and with personalized models, such as Attr2Seq, that do not leverage historical justifications.",
"cite_spans": [
{
"start": 66,
"end": 83,
"text": "(Li et al., 2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 137,
"end": 144,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.4"
},
{
"text": "On the other hand, recent works have reported that models achieving higher diversity scores tend to have lower scores on overlap-based metrics (e.g. BLEU) for open-domain generation tasks (Baheti et al., 2018; Gao et al., 2018). We make a similar observation for our personalized justification generation task. As shown in Table 6, both sampling-based methods, Ref2Seq (Top-k) and ACMLM, achieve higher Distinct-1 and Distinct-2, while their BLEU scores are lower than those of Seq2Seq-based models using beam search. Therefore, we also perform human evaluation to validate the generation quality of our proposed methods.",
"cite_spans": [
{
"start": 185,
"end": 206,
"text": "(Baheti et al., 2018;",
"ref_id": "BIBREF2"
},
{
"start": 207,
"end": 224,
"text": "Gao et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 323,
"end": 330,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.4"
},
{
"text": "We conduct human evaluation on three aspects:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "4.5"
},
{
"text": "(1) Relevance measures whether the generated output contains information relevant to an item;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "4.5"
},
{
"text": "(2) Informativeness measures whether the generated justification includes specific information that is helpful to users; and (3) Diversity measures how distinct the generated output is compared with other justifications. We focus on the Yelp dataset and sample 100 generated examples from each of the five models, as shown in Table 7. Human annotators are asked to give a score in the range [1, 5] (lowest to highest) for each metric. Each example is rated by at least three annotators. The results show that both Ref2Seq (Top-k) and ACMLM achieve higher scores on Diversity and Informativeness compared to other models.",
"cite_spans": [],
"ref_spans": [
{
"start": 325,
"end": 332,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "4.5"
},
{
"text": "Here we study the following two qualitative questions: RQ1: How do training data and methods affect generation? As Table 8 shows, models trained on reviews and tips tend to generate generic phrases (such as 'i love this place') which often do not include information that helps users to make decisions.",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 122,
"text": "Table 8",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "4.6"
},
{
"text": "Table 9: Generated justifications from AP-Ref2Seq. The planned aspects are randomly selected from users' personas. (Format: Dataset | Aspect | Generated Output.) Yelp | dining | the dining room is nice. Yelp | pastry | the pastries were pretty good. Yelp | chicken | the chicken fried rice is the best. Yelp | sandwich | the pulled pork sandwich is the best thing on the menu. Amazon-Clothing | product | great product, fast shipping. Amazon-Clothing | price | design is nice, good price. Amazon-Clothing | leather | comfortable leather sneakers. classic. Amazon-Clothing | walking | sturdy, great city walking shoes.",
"cite_spans": [],
"ref_spans": [
{
"start": 382,
"end": 389,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "Other models trained on the justification datasets tend to mention concrete information (e.g. different aspects). LexRank tends to generate relevant but short content. Meanwhile, sampling-based models are able to generate more diverse content. RQ2: How does aspect planning affect generation? To mitigate the trade-off between diversity and relevance, one approach is to add more constraints during generation, such as constrained beam search (Anderson et al., 2017). In our work, we extend our base model Ref2Seq by incorporating aspect-planning to guide generation. As shown in Table 9, most planned aspects are present in the generated outputs of AP-Ref2Seq.",
"cite_spans": [
{
"start": 450,
"end": 473,
"text": "(Anderson et al., 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 588,
"end": 595,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "Explainable Recommendation There has been a line of work studying how to improve the explainability of recommender systems. Catherine and Cohen (2017) learn latent representations of review text to predict ratings. These representations are then used to find the most helpful reviews for a given user-item pair. Another popular direction is to generate text to justify recommendations. Dong et al. (2017) proposed an attribute-to-sequence model that utilizes categorical attributes to generate product reviews. Ni et al. (2017) developed a multi-task learning method that jointly considers collaborative filtering and review generation. Li et al. (2019b) generated tips by considering 'persona' information, which can capture the language style of users and the characteristics of items. However, these works use whole reviews or tips as training examples, which may not be appropriate given the quality of review text. More recently, Liu et al. (2019) proposed a framework to generate fine-grained explanations for text classification. To obtain labels for human-readable explanations, they constructed a dataset from a website that provides ratings and fine-grained summaries written by users. Unfortunately, most websites do not provide such fine-grained information. In contrast, our work identifies justifications from reviews, uses them as training examples, and shows through extensive experiments that they are a better data source for explainable recommendation.",
"cite_spans": [
{
"start": 405,
"end": 423,
"text": "Dong et al. (2017)",
"ref_id": "BIBREF9"
},
{
"start": 646,
"end": 663,
"text": "Li et al. (2019b)",
"ref_id": "BIBREF17"
},
{
"start": 940,
"end": 957,
"text": "Liu et al. (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Diversity-aware NLG Diversity is an important aspect of NLG systems. Recent works have focused on leveraging prior knowledge to improve generation diversity. Yao et al. (2019) proposed a method to incorporate planned story-lines in story generation. Li et al. (2019a) developed an aspect-aware coarse-to-fine review generation method. They predict an aspect for each sentence in the review to capture the content flow. Given the aspects, a sequence of sentence sketches is generated and a decoder fills in the slots of each sketch. In dialogue systems, several works have studied frameworks that extract templates from historical responses, which are then edited to form new responses (Weston et al., 2018; Wu et al., 2018). Similarly, the extract-and-edit paradigm has been studied in style transfer tasks in NLG. Prior work proposed an attribute-aware masked language model for non-parallel sentiment transfer: the sentiment tokens are first masked out, and a masked language model is then trained to infill the masked positions for the target sentiment. In this work, we also introduce a conditional masked language model, but consider more fine-grained aspects.",
"cite_spans": [
{
"start": 157,
"end": 174,
"text": "Yao et al. (2019)",
"ref_id": "BIBREF32"
},
{
"start": 249,
"end": 266,
"text": "Li et al. (2019a)",
"ref_id": "BIBREF16"
},
{
"start": 685,
"end": 706,
"text": "(Weston et al., 2018;",
"ref_id": "BIBREF28"
},
{
"start": 707,
"end": 723,
"text": "Wu et al., 2018)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this work, we studied the problem of personalized justification generation. To build high-quality justification datasets, we provided an annotated dataset and proposed a pipeline to extract justifications from massive review corpora. To generate convincing and diverse justifications, we developed two models: (1) Ref2Seq, which leverages historical justifications as references during generation; and (2) ACMLM, an aspect-conditional model built on a pre-trained masked language model. Our experiments showed that Ref2Seq achieves higher scores (in terms of BLEU) and ACMLM achieves higher diversity scores compared with baselines. Human evaluation showed that reference-based models obtain high relevance scores and that sampling-based methods lead to more diverse and informative outputs. Finally, we showed that aspect-planning is a promising way to guide generation toward personalized and relevant justifications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "For each aspect, if its singular or plural form appears in the tokenized justification, we consider the aspect to be present in that justification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We set N proportional to the length T of the initial masked template, to prevent the generation from diverging too much from the original template. 3 https://www.yelp.com/dataset/challenge 4 http://jmcauley.ucsd.edu/data/amazon",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/nijianmo/recsys justification.git",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgements. This work is partly supported by NSF #1750063. We thank all the reviewers for their constructive suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Guided open vocabulary image captioning with constrained beam search",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Basura",
"middle": [],
"last": "Fernando",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
}
],
"year": 2017,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2017. Guided open vocabulary image captioning with constrained beam search. In EMNLP.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Summarizing opinions: Aspect extraction meets sentiment prediction and they are both weakly supervised",
"authors": [
{
"first": "Stefanos",
"middle": [],
"last": "Angelidis",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefanos Angelidis and Mirella Lapata. 2018. Summarizing opinions: Aspect extraction meets sentiment prediction and they are both weakly supervised. In EMNLP.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Generating more interesting responses in neural conversation models with distributional constraints",
"authors": [
{
"first": "Ashutosh",
"middle": [],
"last": "Baheti",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "William",
"middle": [
"B"
],
"last": "Dolan",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashutosh Baheti, Alan Ritter, Jiwei Li, and William B. Dolan. 2018. Generating more interesting responses in neural conversation models with distributional constraints. In EMNLP.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Better document-level sentiment analysis from rst discourse parsing",
"authors": [
{
"first": "Parminder",
"middle": [],
"last": "Bhatia",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parminder Bhatia, Yangfeng Ji, and Jacob Eisenstein. 2015. Better document-level sentiment analysis from rst discourse parsing. In EMNLP.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Transnets: Learning to transform for recommendation",
"authors": [
{
"first": "Rose",
"middle": [],
"last": "Catherine",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rose Catherine and William W. Cohen. 2017. Transnets: Learning to transform for recommendation. In RecSys.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "G\u00fclehre",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, \u00c7a\u011flar G\u00fcl\u00e7ehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In EMNLP.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A coefficient of agreement for nominal scales",
"authors": [
{
"first": "Jacob Willem",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1960,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Willem Cohen. 1960. A coefficient of agreement for nominal scales.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning to generate product reviews from attributes",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Shaohan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2017,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Dong, Shaohan Huang, Furu Wei, Mirella Lapata, Ming Zhou, and Ke Xu. 2017. Learning to generate product reviews from attributes. In EACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Lexrank: Graph-based lexical centrality as salience in text summarization",
"authors": [
{
"first": "G\u00fcnes",
"middle": [],
"last": "Erkan",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
}
],
"year": 2004,
"venue": "J. Artif. Intell. Res",
"volume": "22",
"issue": "",
"pages": "457--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00fcnes Erkan and Dragomir R. Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. J. Artif. Intell. Res., 22:457-479.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Generating multiple diverse responses for short-text conversation",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Bi",
"suffix": ""
},
{
"first": "Xiaojiang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Junhui",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shuming",
"middle": [],
"last": "Shi",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Gao, Wei Bi, Xiaojiang Liu, Junhui Li, and Shuming Shi. 2018. Generating multiple diverse responses for short-text conversation. In AAAI.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9:1735-1780.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The curious case of neural text degeneration",
"authors": [
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Maxwell",
"middle": [],
"last": "Forbes",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. CoRR, abs/1904.09751.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A diversity-promoting objective function for neural conversation models",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "William",
"middle": [
"B"
],
"last": "Dolan",
"suffix": ""
}
],
"year": 2015,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and William B. Dolan. 2015. A diversity-promoting objective function for neural conversation models. In HLT-NAACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Delete, retrieve, generate: A simple approach to sentiment and style transfer",
"authors": [
{
"first": "Juncen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Percy",
"middle": [
"S"
],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juncen Li, Robin Jia, He He, and Percy S. Liang. 2018. Delete, retrieve, generate: A simple approach to sentiment and style transfer. In NAACL-HLT.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Generating long and informative reviews with aspect-aware coarse-to-fine decoding",
"authors": [
{
"first": "Junyi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wayne",
"middle": [
"Xin"
],
"last": "Zhao",
"suffix": ""
},
{
"first": "Ji-Rong",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyi Li, Wayne Xin Zhao, Ji-Rong Wen, and Yang Song. 2019a. Generating long and informative reviews with aspect-aware coarse-to-fine decoding. In ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Persona-aware tips generation",
"authors": [
{
"first": "Piji",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zihao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Lidong",
"middle": [],
"last": "Bing",
"suffix": ""
},
{
"first": "Wai",
"middle": [],
"last": "Lam",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piji Li, Zihao Wang, Lidong Bing, and Wai Lam. 2019b. Persona-aware tips generation. In WWW.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Neural rating regression with abstractive tips generation for recommendation",
"authors": [
{
"first": "Piji",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zihao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhaochun",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Lidong",
"middle": [],
"last": "Bing",
"suffix": ""
},
{
"first": "Wai",
"middle": [],
"last": "Lam",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piji Li, Zihao Wang, Zhaochun Ren, Lidong Bing, and Wai Lam. 2017. Neural rating regression with abstractive tips generation for recommendation. In SIGIR.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Towards explainable nlp: A generative explanation framework for text classification",
"authors": [
{
"first": "Hui",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qingyu",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hui Liu, Qingyu Yin, and William Yang Wang. 2019. Towards explainable nlp: A generative explanation framework for text classification. In ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Rhetorical structure theory: toward a functional theory of text",
"authors": [
{
"first": "C",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"A"
],
"last": "Mann",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Thompson",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory: toward a functional theory of text.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A generalized framework of sequence generation with application to undirected sequence models",
"authors": [
{
"first": "Elman",
"middle": [],
"last": "Mansimov",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elman Mansimov, Alex Wang, and Kyunghyun Cho. 2019. A generalized framework of sequence generation with application to undirected sequence models. ArXiv, abs/1905.12790.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Estimating reactions and recommending products with generative models of reviews",
"authors": [
{
"first": "Jianmo",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": ""
},
{
"first": "Sharad",
"middle": [],
"last": "Vikram",
"suffix": ""
},
{
"first": "Julian",
"middle": [
"J"
],
"last": "Mcauley",
"suffix": ""
}
],
"year": 2017,
"venue": "IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianmo Ni, Zachary C. Lipton, Sharad Vikram, and Julian J. McAuley. 2017. Estimating reactions and recommending products with generative models of reviews. In IJCNLP.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Personalized review generation by expanding phrases and attending on aspect-aware representations",
"authors": [
{
"first": "Jianmo",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Mcauley",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianmo Ni and Julian McAuley. 2018. Personalized review generation by expanding phrases and attending on aspect-aware representations. In ACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works. In NIPS.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Bert has a mouth, and it must speak: Bert as a markov random field language model",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang and Kyunghyun Cho. 2019. Bert has a mouth, and it must speak: Bert as a markov random field language model. CoRR, abs/1902.04094.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Toward fast and accurate neural discourse segmentation",
"authors": [
{
"first": "Yizhong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jingfeng",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yizhong Wang, Sujian Li, and Jingfeng Yang. 2018. Toward fast and accurate neural discourse segmen- tation. In EMNLP.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Retrieve and refine: Improved sequence generation models for dialogue",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"H"
],
"last": "Miller",
"suffix": ""
}
],
"year": 2018,
"venue": "SCAI@EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Weston, Emily Dinan, and Alexander H. Miller. 2018. Retrieve and refine: Improved sequence gen- eration models for dialogue. In SCAI@EMNLP.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Mask and infill: Applying masked language model to sentiment transfer",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Liangjun",
"middle": [],
"last": "Zang",
"suffix": ""
},
{
"first": "Jizhong",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Songlin",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Wu, Tao Zhang, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019. Mask and infill: Applying masked language model to sentiment transfer. In IJ- CAI.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Klingner",
"suffix": ""
},
{
"first": "Apurva",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshikiyo",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Stevens",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Kurian",
"suffix": ""
},
{
"first": "Nishant",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Cliff",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Riesa",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Rudnick",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Gregory S. Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and ma- chine translation. CoRR, abs/1609.08144.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Response generation by context-aware prototype editing",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Shaohan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yunli",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhoujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Wu, Furu Wei, Shaohan Huang, Yunli Wang, Zhou- jun Li, and Ming Zhou. 2018. Response generation by context-aware prototype editing. In AAAI.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Planand-write: Towards better automatic storytelling",
"authors": [
{
"first": "Lili",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2019,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lili Yao, Nanyun Peng, Ralph Weischedel, Kevin Knight, Dongyan Zhao, and Rui Yan. 2019. Plan- and-write: Towards better automatic storytelling. In AAAI.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Explicit factor models for explainable recommendation based on phrase-level sentiment analysis",
"authors": [
{
"first": "Yongfeng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Guokun",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yiqun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shaoping",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2014,
"venue": "SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yongfeng Zhang, Guokun Lai, Min Zhang, Yi Zhang, Yiqun Liu, and Shaoping Ma. 2014. Explicit fac- tor models for explainable recommendation based on phrase-level sentiment analysis. In SIGIR.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"num": null,
"content": "<table/>",
"text": "Performance for classifying review segments as good or bad for recommendation justification.",
"type_str": "table",
"html": null
},
"TABREF4": {
"num": null,
"content": "<table/>",
"text": "Examples of the generation output of ACMLM at different iterations.",
"type_str": "table",
"html": null
},
"TABREF6": {
"num": null,
"content": "<table><tr><td>Dataset</td><td/><td>Yelp</td><td/><td/><td/><td colspan=\"2\">Amazon Clothing</td><td/></tr><tr><td>Model</td><td colspan=\"8\">BLEU-3 BLEU-4 Distinct-1 Distinct-2 BLEU-3 BLEU-4 Distinct-1 Distinct-2</td></tr><tr><td>Item-Rand</td><td>0.440</td><td>0.150</td><td>2.766</td><td>20.151</td><td>1.620</td><td>0.680</td><td>2.400</td><td>11.853</td></tr><tr><td>LexRank</td><td>2.290</td><td>0.920</td><td>1.738</td><td>8.509</td><td>3.480</td><td>2.250</td><td>2.407</td><td>14.956</td></tr><tr><td>Attr2seq</td><td>7.890</td><td>0.000</td><td>0.049</td><td>0.095</td><td>1.720</td><td>0.560</td><td>0.076</td><td>0.352</td></tr><tr><td>Ref2Seq</td><td>4.380</td><td>2.450</td><td>0.188</td><td>1.163</td><td>8.780</td><td>5.670</td><td>0.141</td><td>1.240</td></tr><tr><td>AP-Ref2Seq</td><td>3.390</td><td>1.830</td><td>0.326</td><td>2.094</td><td>13.910</td><td>12.500</td><td>0.557</td><td>3.661</td></tr><tr><td>Ref2Seq (Top-k)</td><td>1.630</td><td>0.700</td><td>0.818</td><td>11.927</td><td>3.960</td><td>2.130</td><td>0.697</td><td>10.858</td></tr><tr><td>ACMLM</td><td>0.700</td><td>0.280</td><td>1.322</td><td>14.319</td><td>2.420</td><td>1.590</td><td>0.942</td><td>9.312</td></tr></table>",
"text": "Statistics of our datasets.",
"type_str": "table",
"html": null
},
"TABREF7": {
"num": null,
"content": "<table/>",
"text": "Performance on Automatic Evaluation.",
"type_str": "table",
"html": null
},
"TABREF9": {
"num": null,
"content": "<table><tr><td>: Performance on Human Evaluation, where</td></tr><tr><td>R,I,D represents Relevance, Informativeness and</td></tr><tr><td>Diversity, respectively.</td></tr></table>",
"text": "",
"type_str": "table",
"html": null
},
"TABREF11": {
"num": null,
"content": "<table/>",
"text": "Comparisons of the generated justifications from different models for three businesses on the Yelp dataset.",
"type_str": "table",
"html": null
}
}
}
}