{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:27:45.722782Z"
},
"title": "Controlled Text Generation with Adversarial Learning",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Betti",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Politecnico di Milano",
"location": {}
},
"email": "federico.betti@mail.polimi.it"
},
{
"first": "Giorgia",
"middle": [],
"last": "Ramponi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Politecnico di Milano",
"location": {}
},
"email": "giorgia.ramponi@polimi.it"
},
{
"first": "Massimo",
"middle": [],
"last": "Piccardi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Technology Sydney",
"location": {}
},
"email": "massimo.piccardi@uts.edu.au"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In recent years, generative adversarial networks (GANs) have started to attain promising results also in natural language generation. However, the existing models have paid limited attention to the semantic coherence of the generated sentences. For this reason, in this paper we propose a novel network-the Controlled TExt generation Relational Memory GAN (CTERM-GAN)-that uses an external input to influence the coherence of sentence generation. The network is composed of three main components: a generator based on a Relational Memory conditioned on the external input; a syntactic discriminator which learns to discriminate between real and generated sentences; and a semantic discriminator which assesses the coherence with the external conditioning. Our experiments on six probing datasets have showed that the model has been able to achieve interesting results, retaining or improving the syntactic quality of the generated sentences while significantly improving their semantic coherence with the given input.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In recent years, generative adversarial networks (GANs) have started to attain promising results also in natural language generation. However, the existing models have paid limited attention to the semantic coherence of the generated sentences. For this reason, in this paper we propose a novel network-the Controlled TExt generation Relational Memory GAN (CTERM-GAN)-that uses an external input to influence the coherence of sentence generation. The network is composed of three main components: a generator based on a Relational Memory conditioned on the external input; a syntactic discriminator which learns to discriminate between real and generated sentences; and a semantic discriminator which assesses the coherence with the external conditioning. Our experiments on six probing datasets have showed that the model has been able to achieve interesting results, retaining or improving the syntactic quality of the generated sentences while significantly improving their semantic coherence with the given input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Natural language generation (NLG) is gaining increasing attention in the NLP community thanks to its intriguing complexity and central role in many tasks and applications. Recently, generative adversarial networks (GANs) [7] have started to display promising performance also in NLG. GANs leverage a form of adversarial learning where a generator incrementally learns to generate realistic samples, while a discriminator simultaneously learns to discriminate between real and generated data. They had originally been proposed as a generative approach for continuous data, such as images, but have later found application also for discrete data, despite their well-known \"non-differentiability issue\". In fact, several GANs have recently been proposed for text generation [24, 16, 25] and have achieved encouraging results in comparison to comparable maximum likelihood approaches; in particular, RelGAN [16] has outperformed state-of-theart (SOTA) results.",
"cite_spans": [
{
"start": 221,
"end": 224,
"text": "[7]",
"ref_id": null
},
{
"start": 771,
"end": 775,
"text": "[24,",
"ref_id": "BIBREF23"
},
{
"start": 776,
"end": 779,
"text": "16,",
"ref_id": "BIBREF15"
},
{
"start": 780,
"end": 783,
"text": "25]",
"ref_id": "BIBREF24"
},
{
"start": 903,
"end": 907,
"text": "[16]",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In general, an effective NLG model should enjoy two main properties: the syntactic correctness and the semantic coherence of the generated sentences. Although both these aspects are key for the usability of NLG models, often only the syntactic aspect is taken into account during training and evaluation. For this reason, in this paper we propose a new model -the Controlled TExt generation Relational Memory GAN (CTERM-GAN) -which explicitly takes into account both syntactic and semantic aspects. CTERM-GAN consists of the following main modules: 1) a generator based on a relational memory with self-attention conditioned on an external input; 2) a syntax discriminator which learns to discriminate between real and generated sentences based on syntactic correctness; and 3) a semantic discriminator trained to assess whether a sentence is coherent with the external conditioning. Like a conventional NLG GAN, this model is trained to generate syntactically-correct sentences; however, the inclusion of both a second discriminator and a generator influenced by an external input allows improving the coherence of the generated sentences. The experimental results in Section 4 show that the proposed model has been able to retain or increase syntactic accuracy while at the same time drastically improving semantic coherence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Using GANs for discrete data generation is still a developing research area. The two main research directions are along reinforcement learning (RL)based and reparametrization-based models. The former use RL algorithms to circumvent the nondifferentiability issue and include SeqGAN [24] and several other models [8, 13, 3] . The latter, instead, leverage continuous approximations of discrete sampling [25, 4, 16] . Recently, RelGAN [16] has introduced a Gumbel-softmax relaxation of discrete sampling [11] , alongside a multiple discriminator model to extract different features from the sentences. RelGAN has outperformed all other compared GAN models on a variety of challenging datasets. The idea of using external conditioning to improve or control NLG has also been widely explored [6, 22, 20, 21] . For instance, TopicRNN [6] increases the probability of words related to a control topic during sentence generation. SentiGAN [20] has proposed a model that generates sentences conditioned on a sentiment by using multiple generators, one per sentiment, and a multi-label discriminator. TCNLM [21] uses a neural topic model to first extract the latent topic, and then feeds it to a mixture of expert language models, each specialized for an individual topic. Differently from them, in our model we use a single generator that controls the text generation by means of a Relational Memory which has been exposed to the conditioning input. In turn, two distinct discriminators respectively assess the syntactic quality and coherence to the input of the generated sentences. Our model is independent of the specific nature of the conditioning input and, as such, it is the only one to date that can be used for both topic-conditioned and sentiment-conditioned generation, and, in principle, other flavors.",
"cite_spans": [
{
"start": 282,
"end": 286,
"text": "[24]",
"ref_id": "BIBREF23"
},
{
"start": 312,
"end": 315,
"text": "[8,",
"ref_id": "BIBREF7"
},
{
"start": 316,
"end": 319,
"text": "13,",
"ref_id": "BIBREF12"
},
{
"start": 320,
"end": 322,
"text": "3]",
"ref_id": "BIBREF2"
},
{
"start": 402,
"end": 406,
"text": "[25,",
"ref_id": "BIBREF24"
},
{
"start": 407,
"end": 409,
"text": "4,",
"ref_id": "BIBREF3"
},
{
"start": 410,
"end": 413,
"text": "16]",
"ref_id": "BIBREF15"
},
{
"start": 433,
"end": 437,
"text": "[16]",
"ref_id": "BIBREF15"
},
{
"start": 502,
"end": 506,
"text": "[11]",
"ref_id": "BIBREF10"
},
{
"start": 788,
"end": 791,
"text": "[6,",
"ref_id": "BIBREF5"
},
{
"start": 792,
"end": 795,
"text": "22,",
"ref_id": "BIBREF21"
},
{
"start": 796,
"end": 799,
"text": "20,",
"ref_id": "BIBREF19"
},
{
"start": 800,
"end": 803,
"text": "21]",
"ref_id": "BIBREF20"
},
{
"start": 829,
"end": 832,
"text": "[6]",
"ref_id": "BIBREF5"
},
{
"start": 932,
"end": 936,
"text": "[20]",
"ref_id": "BIBREF19"
},
{
"start": 1098,
"end": 1102,
"text": "[21]",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This section presents the main details of CTERM-GAN (namely, the generator and the syntax and semantic discriminators). 1 ",
"cite_spans": [
{
"start": 120,
"end": 121,
"text": "1",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "As training loss, we have used a non-saturating GAN loss function [7] , that, considering the doublediscriminator model, is extended as:",
"cite_spans": [
{
"start": 66,
"end": 69,
"text": "[7]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Loss function",
"sec_num": "3.1"
},
{
"text": "lD = 1 m m X i=1 \uf8ff\u2713 log(D S (xr)) + log(1 D S (G(xz))) \u25c6 + \u2713 log(D T (xr)) + log(1 D T (G(xz))) \u25c6 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss function",
"sec_num": "3.1"
},
{
"text": "All the training information and hyperparameters are described in Appendix D. We will release all our code publicly after the anonymity period. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss function",
"sec_num": "3.1"
},
{
"text": "lG = 1 m m X i=1 \uf8ff log(D S (G(xz))) log(D T (G(xz)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss function",
"sec_num": "3.1"
},
{
"text": "where is a hyperparameter that assigns a relative weight to the topic discriminator with respect to the syntax one. plays an important role during training since, if it is too low, the model ignores the conditioning due to the limited penalty. Conversely, a too high a value would give too much importance to the conditioning, affecting the quality of the generated sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss function",
"sec_num": "3.1"
},
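As a concrete illustration of the double-discriminator objective above, here is a minimal PyTorch-style sketch (not the authors' released code): the function names, the call signatures of the two discriminators, the default value of the weight, and the convention of minimizing the negative log terms are all illustrative assumptions.

```python
import torch

def discriminator_loss(d_syn, d_sem, real_x, fake_x, cond, lambda_weight=0.1):
    # Non-saturating GAN loss extended with a second (semantic) discriminator.
    # d_syn scores syntactic realism; d_sem scores coherence with `cond`.
    eps = 1e-8
    loss_syn = -(torch.log(d_syn(real_x) + eps) +
                 torch.log(1.0 - d_syn(fake_x) + eps)).mean()
    loss_sem = -(torch.log(d_sem(real_x, cond) + eps) +
                 torch.log(1.0 - d_sem(fake_x, cond) + eps)).mean()
    return loss_syn + lambda_weight * loss_sem

def generator_loss(d_syn, d_sem, fake_x, cond, lambda_weight=0.1):
    # Non-saturating generator loss: push both discriminators towards "real".
    eps = 1e-8
    return -(torch.log(d_syn(fake_x) + eps) +
             lambda_weight * torch.log(d_sem(fake_x, cond) + eps)).mean()
```

The weight balances the two discriminators exactly as discussed above: a small value lets the generator ignore the conditioning, while a large one trades syntactic quality for coherence.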
{
"text": "The generator is based on a Relational Memory with self-attention [18, 16] . This model updates its \"internal values\" and produces its final output by selecting from its memory cells with a self-attention mechanism. Leveraging an idea similar to that of image-based conditional GANs [15] , we introduce an external conditioning into the generator. First, given the conditioning input c 2 R d , the model computes an embedding t for c using function",
"cite_spans": [
{
"start": 66,
"end": 70,
"text": "[18,",
"ref_id": "BIBREF17"
},
{
"start": 71,
"end": 74,
"text": "16]",
"ref_id": "BIBREF15"
},
{
"start": 283,
"end": 287,
"text": "[15]",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generator",
"sec_num": "3.2"
},
{
"text": "f \u2713 : R d ! R m , with m < d. Function f \u2713 has been",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generator",
"sec_num": "3.2"
},
{
"text": "implemented using a feed-forward neural network with a self-attention layer. The conditioning vector c may originate from any type of different source as long as it remains consistent during the individual training. Depending on the required task, as shown in the experiment phase, it will change. This vector c is the only link between the conditioning and the generative model; its influence on the final output will be crucial for the conditioning of the generated sentence. f \u2713 has been adopted to give the model the ability to learn the best manipulation of the conditioning vector to insert into the memory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generator",
"sec_num": "3.2"
},
{
"text": "Given the previous output y t 1 and the processed conditioning t at the current time-step, the memory updates its state, h t , using a double-step sequential update and computes the next value, o t , as in Eqs. 1-2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generator",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t = f \u2713 (c) (1) o t , h t = f RM (h t 1 , t , y t 1 )",
"eq_num": "(2)"
}
],
"section": "Generator",
"sec_num": "3.2"
},
{
"text": "where f RM represents the memory cell update. The distribution over the vocabulary of the next word is evaluated using the memory output o t as in Eq. 3 with a feed-forward layer. Then, the next soft word,\u0177 t , is sampled using the Gumbelsoftmax relaxation [11] with temperature T (Eq. 4). The temperature value greatly influences the quality-diversity trade-off; more details on these parameters are provided in Appendix D.",
"cite_spans": [
{
"start": 257,
"end": 261,
"text": "[11]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generator",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y t = f \u21b5 (o t ) (3) y t \u21e0 Gumbel-softmax(\u0233 t , T )",
"eq_num": "(4)"
}
],
"section": "Generator",
"sec_num": "3.2"
},
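As a rough illustration of the sampling step in Eqs. 3-4, here is a minimal PyTorch sketch; the `output_to_logits` layer standing in for f_\alpha, the batch size, and the dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, mem_dim, temperature = 5000, 256, 1.0

# Hypothetical feed-forward layer playing the role of f_alpha (Eq. 3).
output_to_logits = nn.Linear(mem_dim, vocab_size)

o_t = torch.randn(32, mem_dim)            # memory output for a batch of 32
logits = output_to_logits(o_t)            # unnormalized distribution over the vocabulary
# Gumbel-softmax relaxation (Eq. 4): a differentiable "soft" one-hot word.
y_hat = F.gumbel_softmax(logits, tau=temperature, hard=False)
next_input = y_hat                        # fed back to the generator at step t+1
```

Lower temperatures push the soft word towards a one-hot vector (higher quality, lower diversity), which is the quality-diversity trade-off mentioned above.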
{
"text": "The syntax discriminator takes as input either a real sentence, r = (r 1 , . . . , r n ), or a generated one,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntax discriminator",
"sec_num": "3.3"
},
{
"text": "g = (g 1 , . . . , g n ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntax discriminator",
"sec_num": "3.3"
},
{
"text": "Similarly to many other works (e.g., [9, 23] ), the discriminator first transforms its input into an embedding matrix. This embedding allows learning a transformation that condenses the information brought in by each word optimally for any given task. The syntax discriminator is then built using two convolutional layers with ReLU activation functions, followed by a self-attention layer, again followed by two other convolutional layers with ReLU activation functions. The selfattention layer is used to attend to the output of the previous convolutional layer and select the most useful features. The final layers generate the decision.",
"cite_spans": [
{
"start": 37,
"end": 40,
"text": "[9,",
"ref_id": "BIBREF8"
},
{
"start": 41,
"end": 44,
"text": "23]",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syntax discriminator",
"sec_num": "3.3"
},
{
"text": "One of the main novelties of our approach is the explicit targeting of semantic coherence. This is achieved by augmenting the model with a semantic discriminator trained to recognize whether the input sentence is consistent with the conditioning input, c, or not. To produce its output, this discriminator receives as input both c and either a real sentence, r = (r 1 , . . . , r n ), or the output of the generator, g = (g 1 , . . . , g n ). The proposed architecture is composed of two networks: one for the sentence and one for input c. The first network consists of a feed-forward layer which acts as an embedding, followed by four convolutional and self-attention layers with ReLU activation functions which extract a latent vector expected to represent the main characteristics of the sentence. In the second network, input c is passed through a linear layer to suitably reduce or expand its size to that of the output vector of the first network. The two outputs are then combined and the final decision is computed with a feed-forward layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic discriminator",
"sec_num": "3.4"
},
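The following is a minimal PyTorch-style sketch of the two-branch semantic discriminator described above; the layer sizes, the single self-attention placement, the mean-pooling, and the fusion by element-wise product are simplifying assumptions rather than the exact architecture.

```python
import torch
import torch.nn as nn

class SemanticDiscriminator(nn.Module):
    # Branch 1 encodes the (soft one-hot) sentence; branch 2 projects the
    # conditioning vector c to the same size; a final layer makes the decision.
    def __init__(self, vocab_size, cond_dim, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Linear(vocab_size, emb_dim)   # soft one-hot -> embedding
        self.convs = nn.Sequential(
            nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.cond_proj = nn.Linear(cond_dim, hidden)
        self.classifier = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, sentence, cond):
        # sentence: (batch, seq_len, vocab_size) soft one-hot; cond: (batch, cond_dim)
        h = self.embed(sentence).transpose(1, 2)      # (batch, emb, seq)
        h = self.convs(h).transpose(1, 2)             # (batch, seq, hidden)
        h, _ = self.attn(h, h, h)                     # self-attention over positions
        sent_vec = h.mean(dim=1)                      # latent sentence vector
        cond_vec = self.cond_proj(cond)               # resize c to match
        return self.classifier(sent_vec * cond_vec)   # coherence probability
```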
{
"text": "The CTERM-GAN model has been tested over two tasks: topic conditioning and sentiment conditioning. The former consists of generation guided by exogenous text input, while the latter focuses on the generation of sentences given a sentiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "In all experiments, we have separately trained the generator for 150 epochs and the semantic discriminator for 300 epochs before the adversarial training was started. After that, the generator has been trained for 2 batches and the discriminators for 3 batches in each adversarial epoch. For the weight in the loss function, several values were tested and the optimal value was found to be 0.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "In these experiments we have compared the conditioned text generation of CTERM-GAN with that of the state-of-the-art adversarial architectures -Se-qGAN [24] , RelGAN [16] , and TGVAE [22] -and a classic auto-regressive LSTM language model with an initial conditioning, in terms of both syntactic and semantic quality. The main goal is to ensure a good quality for the generation by introducing a conditioning on the semantic of the sentence. In this task, the conditioning consists of the word distribution for a topic extracted from a sentence, either provided by the user or, as in our case, sampled from the dataset. Any type of topic model can be adopted: in our case, an LDA model [2] has been trained on a starting dataset in order to have a distribution of the topics covered within the corpus. The LDA model, both in training and in inference, given an input sentence, builds a distribution on the vocabulary. In turn, this distribution influences the model's sentence generation thanks to its inclusion in the generation process. Most likely, improving the quality of the topic extraction is likely to improve the final results of the model. Eventually, the extracted distribution is used as the condition- ing input, c, for the relational memory during the generation, as described in Section 3.",
"cite_spans": [
{
"start": 152,
"end": 156,
"text": "[24]",
"ref_id": "BIBREF23"
},
{
"start": 166,
"end": 170,
"text": "[16]",
"ref_id": "BIBREF15"
},
{
"start": 183,
"end": 187,
"text": "[22]",
"ref_id": "BIBREF21"
},
{
"start": 686,
"end": 689,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic conditioning",
"sec_num": "4.1"
},
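As an illustration of how a topic distribution can be extracted from a sentence and turned into the conditioning vector c, here is a minimal scikit-learn sketch. The paper trains an LDA model [2] on the full dataset; the toy corpus, the choice of 10 topics, and the mixing of topic-word distributions into a single vocabulary distribution are assumptions made for illustration only.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus standing in for the training dataset (COCO captions, news, ...).
corpus = [
    "a man rides a surfboard on a large wave",
    "a plate of food with broccoli and rice",
    "a group of people standing around a market",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(corpus)

lda = LatentDirichletAllocation(n_components=10, random_state=0)
lda.fit(counts)

# Word distribution of each topic: p(word | topic).
topic_word = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

# For an input sentence, infer its topic mixture and turn it into a
# distribution over the vocabulary, used here as the conditioning vector c.
sentence = ["a surfer rides a big wave near the beach"]
doc_topic = lda.transform(vectorizer.transform(sentence))[0]   # p(topic | sentence)
c = doc_topic @ topic_word                                     # distribution over the vocabulary
```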
{
"text": "Datasets To evaluate the topic-conditioned text generation we have used four benchmark datasets: COCO Image Caption [5] , EMNLP2017 WMT News [9] , APNews 2 and BNC [1] . The COCO Image Caption dataset is composed of image captions that we have preprocessed following [26] . The EMNLP2017 WMT News dataset consists of longer sentences than COCO's that were also preprocessed according to [26] . APNews is a dataset of Associated Press' news articles from 2009 to 2016, and the BNC dataset is the written portion of the British National Corpus. These datasets are highly diverse in terms of type of texts, covering books, essays, journals and news. More specific information about these datasets are shown in Table 4 .",
"cite_spans": [
{
"start": 116,
"end": 119,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 141,
"end": 144,
"text": "[9]",
"ref_id": "BIBREF8"
},
{
"start": 164,
"end": 167,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 267,
"end": 271,
"text": "[26]",
"ref_id": "BIBREF25"
},
{
"start": 387,
"end": 391,
"text": "[26]",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 707,
"end": 715,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Topic conditioning",
"sec_num": "4.1"
},
{
"text": "Evaluation As evaluation measures, we have adopted corpus BLEU [17] to assess syntactic quality and the Kullback-Leibler (KL) divergence [12] between the topic used for conditioning and the topic extracted from the generated sentence to assess semantic coherence. A low KL value means that the distribution inferred from the output of the model is similar to the one extracted from the conditioning input sentence and used as conditioning vector c. This implies that the semantic conditioning has been carried out successfully.",
"cite_spans": [
{
"start": 63,
"end": 67,
"text": "[17]",
"ref_id": "BIBREF16"
},
{
"start": 137,
"end": 141,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic conditioning",
"sec_num": "4.1"
},
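A minimal sketch of the KL-based coherence measure described above: it compares the conditioning distribution with the one re-extracted (e.g. by the same LDA model) from a generated sentence. The toy distributions and the smoothing constant are assumptions for illustration.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) between two discrete distributions over the same support.
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Conditioning distribution vs. the distribution extracted from a generated
# sentence; a lower value means more successful semantic conditioning.
p_conditioning = [0.70, 0.20, 0.05, 0.05]
q_generated    = [0.60, 0.25, 0.10, 0.05]
print(kl_divergence(p_conditioning, q_generated))
```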
{
"text": "Results Table 2 shows the results of the topicconditioning experiments over the four datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 8,
"end": 15,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Topic conditioning",
"sec_num": "4.1"
},
{
"text": "2 https://www.ap.org/en-gb/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic conditioning",
"sec_num": "4.1"
},
{
"text": "The BLEU results (columns B-2, B-4) empirically demonstrate that the syntactic quality of the text generated by CTERM-GAN is often superior to that of the state-of-the-art GANs for text generation. The results for TGVAE are shown for both 10 topics, as used for CTERM-GAN , and for its best reported configuration. In turn, the KL results show that CTERM-GAN has also achieved better coherence to the conditioning topic than RelGAN for all datasets. For some datasets, the LSTM-based model has obtained a lower (i.e. better) divergence, yet at a considerable reduction in terms of BLEU scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic conditioning",
"sec_num": "4.1"
},
{
"text": "In these experiments we have compared the sentiment-conditioned text generation of CTERM-GAN with that of SeqGAN [24] , SentiGAN [20] and an RNNLM baseline [14] . Following the experiments carried out in [20] , the conditioning has been performed based on only two sentiments, positive or negative. Table 3 : Sentiment conditioning results. The values of models other than CTERM-GAN are reproduced from [20] .",
"cite_spans": [
{
"start": 113,
"end": 117,
"text": "[24]",
"ref_id": "BIBREF23"
},
{
"start": 129,
"end": 133,
"text": "[20]",
"ref_id": "BIBREF19"
},
{
"start": 156,
"end": 160,
"text": "[14]",
"ref_id": "BIBREF13"
},
{
"start": 204,
"end": 208,
"text": "[20]",
"ref_id": "BIBREF19"
},
{
"start": 403,
"end": 407,
"text": "[20]",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 299,
"end": 306,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentiment conditioning",
"sec_num": "4.2"
},
{
"text": "Dataset We have used two datasets, Movie Reviews (MR) [19] and Customer Reviews (CR) [10] , where individual sentences are annotated as either positive or negative. The Movie Reviews dataset consists of user reviews of movies, with 2, 133 positive and 2, 370 negative sentences. The Customer Reviews dataset consists of 1, 500 reviews of products sold online, with positive/negative annotation at sentence level. For this task, only sentences of length shorter than 15 words have been retained, to be able to use the same preprocessing as [20] .",
"cite_spans": [
{
"start": 54,
"end": 58,
"text": "[19]",
"ref_id": "BIBREF18"
},
{
"start": 85,
"end": 89,
"text": "[10]",
"ref_id": "BIBREF9"
},
{
"start": 539,
"end": 543,
"text": "[20]",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment conditioning",
"sec_num": "4.2"
},
{
"text": "Evaluation For this task, we have classified the generated sentences in terms of their sentiment using a Bidirectional-LSTM as classifier. In addition, we have evaluated two quality metrics: 1) the novelty of each generated sentence (Eq. 5) using the definition from [20] , where JS is the Jaccard similarity and C j are the training set sentences. The novelty measures the diversity between the generated data and the training corpus; and 2) the diversity metric (Eq. 6), a measure of the model's ability to generate diverse sentences and avoid mode collapse.",
"cite_spans": [
{
"start": 267,
"end": 271,
"text": "[20]",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment conditioning",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Novelty(S i ) = 1 max{JS(S i , C j )} j=|C| j=1 (5) Diversity(S i ) = 1 max{JS(S i , S j )} j=|S|, j6 =i j=1",
"eq_num": "(6)"
}
],
"section": "Sentiment conditioning",
"sec_num": "4.2"
},
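A minimal sketch of Eqs. 5-6 using word-set Jaccard similarity; treating each sentence as a set of word types is an assumption here, since the exact similarity granularity used in [20] is not restated above.

```python
def jaccard(a, b):
    # Jaccard similarity between two sentences viewed as sets of word types.
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def novelty(generated, corpus):
    # Eq. 5: 1 minus the highest similarity to any training sentence.
    return 1.0 - max(jaccard(generated, c) for c in corpus)

def diversity(generated_all, i):
    # Eq. 6: 1 minus the highest similarity to any other generated sentence.
    return 1.0 - max(jaccard(generated_all[i], s)
                     for j, s in enumerate(generated_all) if j != i)

corpus = ["the movie was great fun", "the plot was very dull"]
generated = ["the acting was great", "a truly dull and boring plot"]
print(novelty(generated[0], corpus), diversity(generated, 0))
```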
{
"text": "Results Table 3 shows the results from the sentiment-conditioned experiments. CTERM-GAN has been able to achieve a remarkable performance trade-off, with the best sentiment classification accuracy and diversity over the CR dataset, and the same sentiment classification accuracy as SentiGAN (k=2) on the MR dataset, yet with significantly decreased novelty and diversity. While the performance of CTERM-GAN and SentiGAN may be regarded as comparable overall, we emphasize once again that the proposed model is not specialized for sentiment conditioning or any specific types of conditioning.",
"cite_spans": [],
"ref_spans": [
{
"start": 8,
"end": 15,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentiment conditioning",
"sec_num": "4.2"
},
{
"text": "In this short paper we have proposed the Controlled TExt generation Relational Memory GAN (CTERM-GAN), a model aiming to generate sentences that are both syntactically correct and semantically coherent. The proposed model leverages a Relational Memory that is influenced by a conditioning input and is used to generate sentences, alongside two discriminators that respectively assess the sentences' syntactic quality and semantic coherence. The experimental results over topicconditioned and sentiment-conditioned tasks have shown that the proposed model has performed at the same level or above that of SOTA GANs and relevant baselines. In the near future, we will explore text generation with other type of conditioning inputs such as writer's style and images to further probe the generality of the proposed model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We thank Professor Stefano Ceri for the support, and the valuable comments and ideas.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The British National Corpus, version 3 (bnc xml edition)",
"authors": [],
"year": 2007,
"venue": "Distributed by Bodleian Libraries, University of Oxford, on behalf of the BNC Consortium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "The British National Corpus, version 3 (bnc xml edi- tion). In Distributed by Bodleian Libraries, Univer- sity of Oxford, on behalf of the BNC Consortium., 2007.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Jordan. Latent Dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jor- dan. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993-1022, March 2003.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Yangqiu Song, and Yoshua Bengio. Maximum-likelihood augmented discrete generative adversarial networks",
"authors": [
{
"first": "Yanran",
"middle": [],
"last": "Tong Che",
"suffix": ""
},
{
"first": "Ruixiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Devon",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Hjelm",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.07983"
]
},
"num": null,
"urls": [],
"raw_text": "Tong Che, Yanran Li, Ruixiang Zhang, R Devon Hjelm, Wenjie Li, Yangqiu Song, and Yoshua Ben- gio. Maximum-likelihood augmented discrete gen- erative adversarial networks. arXiv:1702.07983, 2017.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Adversarial text generation via feature-mover's distance",
"authors": [
{
"first": "Liqun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Shuyang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Chenyang",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Haichao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.06297"
]
},
"num": null,
"urls": [],
"raw_text": "Liqun Chen, Shuyang Dai, Chenyang Tao, Dinghan Shen, Zhe Gan, Haichao Zhang, Yizhe Zhang, and Lawrence Carin. Adversarial text generation via feature-mover's distance. arXiv:1809.06297, 2018.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Microsoft COCO captions: Data collection and evaluation server",
"authors": [
{
"first": "Xinlei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Ramakrishna",
"middle": [],
"last": "Vedantam",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "C Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1504.00325"
]
},
"num": null,
"urls": [],
"raw_text": "Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakr- ishna Vedantam, Saurabh Gupta, Piotr Doll\u00e1r, and C Lawrence Zitnick. Microsoft COCO captions: Data collection and evaluation server. arXiv:1504.00325, 2015.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "TopicRNN: a recurrent neural network with long-range semantic dependency",
"authors": [
{
"first": "B",
"middle": [],
"last": "Adji",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Dieng",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Paisley",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01702"
]
},
"num": null,
"urls": [],
"raw_text": "Adji B Dieng, Chong Wang, Jianfeng Gao, and John Paisley. TopicRNN: a recurrent neural net- work with long-range semantic dependency. arXiv: 1611.01702, 2016.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Long Text Generation via",
"authors": [
{
"first": "Jiaxian",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Sidi",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2017,
"venue": "Adversarial Training with Leaked Information",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1709.08624"
]
},
"num": null,
"urls": [],
"raw_text": "Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and Jun Wang. Long Text Generation via Adversarial Training with Leaked Information. arXiv:1709.08624, 2017.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Long text generation via adversarial training with leaked information",
"authors": [
{
"first": "Jiaxian",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Sidi",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and Jun Wang. Long text generation via adver- sarial training with leaked information. In Thirty- Second AAAI Conference on Artificial Intelligence, 2018.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Mining and summarizing customer reviews",
"authors": [
{
"first": "Minqing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '04",
"volume": "",
"issue": "",
"pages": "168--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minqing Hu and Bing Liu. Mining and summariz- ing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowl- edge Discovery and Data Mining, KDD '04, pages 168-177, New York, NY, USA, 2004. ACM.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Categorical reparameterization with gumbel-softmax",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Jang",
"suffix": ""
},
{
"first": "Shixiang",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Poole",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01144"
]
},
"num": null,
"urls": [],
"raw_text": "Eric Jang, Shixiang Gu, and Ben Poole. Cat- egorical reparameterization with gumbel-softmax. arXiv:1611.01144, 2016.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "On information and sufficiency",
"authors": [
{
"first": "Solomon",
"middle": [],
"last": "Kullback",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Richard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Leibler",
"suffix": ""
}
],
"year": 1951,
"venue": "The Annals of Mathematical Statistics",
"volume": "22",
"issue": "1",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Solomon Kullback and Richard A Leibler. On in- formation and sufficiency. The Annals of Mathemat- ical Statistics, 22(1):79-86, 1951.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Adversarial ranking for language generation",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Dianqi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Zhengyou",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ming-Ting",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Lin, Dianqi Li, Xiaodong He, Zhengyou Zhang, and Ming-Ting Sun. Adversarial ranking for language generation. CoRR, abs/1705.11001, 2017.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Recurrent neural network based language model. INTER-SPEECH",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafiat",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Cernocky",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurrent neural network based language model. INTER- SPEECH 2010, 2010.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Conditional generative adversarial nets",
"authors": [
{
"first": "Mehdi",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Osindero",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1411.1784"
]
},
"num": null,
"urls": [],
"raw_text": "Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv:1411.1784, 2014.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "RelGAN: Relational Generative Adversarial Networks for Text Generation. ICLR 2019",
"authors": [
{
"first": "Weili",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Narodytska",
"suffix": ""
},
{
"first": "Ankit",
"middle": [
"B"
],
"last": "Patel",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weili Nie, Nina Narodytska, and Ankit B. Pa- tel. RelGAN: Relational Generative Adversarial Networks for Text Generation. ICLR 2019, pages 1-20, 2019.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: a method for automatic eval- uation of machine translation. In Proceedings of the 40th ACL, pages 311-318. Association for Compu- tational Linguistics, 2002.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Oriol Vinyals, Razvan Pascanu, and Timothy Lillicrap. Relational recurrent neural networks",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Santoro",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Faulkner",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Raposo",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Rae",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Chrzanowski",
"suffix": ""
},
{
"first": "Th\u00e9ophane",
"middle": [],
"last": "Weber",
"suffix": ""
},
{
"first": "Daan",
"middle": [],
"last": "Wierstra",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Santoro, Ryan Faulkner, David Raposo, Jack Rae, Mike Chrzanowski, Th\u00e9ophane Weber, Daan Wierstra, Oriol Vinyals, Razvan Pascanu, and Timothy Lillicrap. Relational recurrent neural net- works. 2018.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of EMNLP 2013",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. Recursive deep models for se- mantic compositionality over a sentiment treebank. In Proceedings of EMNLP 2013, pages 1631-1642, 2013.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "SentiGAN: Generating Sentimental Texts via Mixture Adversarial Networks",
"authors": [
{
"first": "Ke",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "4446--4452",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ke Wang and Xiaojun Wan. SentiGAN: Generat- ing Sentimental Texts via Mixture Adversarial Net- works. In Proceedings of the Twenty-Seventh Inter- national Joint Conference on Artificial Intelligence, pages 4446-4452, 2018.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Topic Compositional Neural Language Model",
"authors": [
{
"first": "Wenlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Wenqi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Jiaji",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Ping",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Satheesh",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1712.09783"
]
},
"num": null,
"urls": [],
"raw_text": "Wenlin Wang, Zhe Gan, Wenqi Wang, Dinghan Shen, Jiaji Huang, Wei Ping, Sanjeev Satheesh, and Lawrence Carin. Topic Compositional Neural Lan- guage Model. arXiv:1712.09783, 2018.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Topic-guided variational autoencoders for text generation. CoRR, abs",
"authors": [
{
"first": "Wenlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Hongteng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ruiyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Guoyin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Changyou",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 1903,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenlin Wang, Zhe Gan, Hongteng Xu, Ruiyi Zhang, Guoyin Wang, Dinghan Shen, Changyou Chen, and Lawrence Carin. Topic-guided vari- ational autoencoders for text generation. CoRR, abs/1903.07137, 2019.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Sequence Generative Adversarial Nets with Policy Gradient",
"authors": [
{
"first": "Lantao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Seqgan",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.05473"
]
},
"num": null,
"urls": [],
"raw_text": "Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient. arXiv:1609.05473, 2016.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "SeqGAN: Sequence generative adversarial nets with policy gradient",
"authors": [
{
"first": "Lantao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2017,
"venue": "Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2852--2858",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Conference on Artificial Intelligence, pages 2852-2858, 2017.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Adversarial feature matching for text generation",
"authors": [
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Henao",
"suffix": ""
},
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "4006--4015",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ri- cardo Henao, Dinghan Shen, and Lawrence Carin. Adversarial feature matching for text generation. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 4006-4015. JMLR. org, 2017.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Texygen: A benchmarking platform for text generation models",
"authors": [
{
"first": "Yaoming",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Sidi",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Jiaxian",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. Texygen: A benchmarking platform for text generation mod- els. CoRR, abs/1802.01886, 2018.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "CTERM-GAN architecture",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"text": "Topic conditioning results. The values for TGVAE are reproduced from[22] where KL values are not available.",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}