{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:28:07.267490Z"
},
"title": "RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation",
"authors": [
{
"first": "Micha\u0142",
"middle": [],
"last": "Bie\u0144",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Micha\u0142",
"middle": [],
"last": "Gilski",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Martyna",
"middle": [],
"last": "Maciejewska",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Taisner",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Dawid",
"middle": [],
"last": "Wi\u015bniewski",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Agnieszka",
"middle": [],
"last": "\u0141awrynowicz",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Semi-structured text generation is a non-trivial problem. Although last years have brought lots of improvements in natural language generation, thanks to the development of neural models trained on large scale datasets, these approaches still struggle with producing structured, context-and commonsense-aware texts. Moreover, it is not clear how to evaluate the quality of generated texts. To address these problems, we introduce RecipeNLG-a novel dataset of cooking recipes. We discuss the data collection process and the relation between the semi-structured texts and cooking recipes. We use the dataset to approach the problem of generating recipes. Finally, we make use of multiple metrics to evaluate the generated recipes.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Semi-structured text generation is a non-trivial problem. Although last years have brought lots of improvements in natural language generation, thanks to the development of neural models trained on large scale datasets, these approaches still struggle with producing structured, context-and commonsense-aware texts. Moreover, it is not clear how to evaluate the quality of generated texts. To address these problems, we introduce RecipeNLG-a novel dataset of cooking recipes. We discuss the data collection process and the relation between the semi-structured texts and cooking recipes. We use the dataset to approach the problem of generating recipes. Finally, we make use of multiple metrics to evaluate the generated recipes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A cooking recipe is a very specific category of text, that facilitates sharing culinary ideas between people and provides algorithms for food preparation. Although the recipes follow a set of informal rules which make the cooking experience understandable and reproducible (Fisher, 1969) , there are no strict rules on how this text should be structured. This makes it hard to estimate the recipe quality using any objective measures.",
"cite_spans": [
{
"start": 273,
"end": 287,
"text": "(Fisher, 1969)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, we have noticed a major growth of interest in using cooking recipes datasets for performing deep learning experiments. In particular, there is a number of interesting endeavors utilizing computer vision for finding (Salvador et al., 2017) or even generating cooking recipes matching the input food image. One of the results was the publication of the Recipe1M+ (Salvador et al., 2017) (Marin et al., 2019) dataset containing both recipes and images. This dataset, which was the largest publicly available recipes dataset at the time, boosted research in this area.",
"cite_spans": [
{
"start": 225,
"end": 248,
"text": "(Salvador et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 371,
"end": 394,
"text": "(Salvador et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 395,
"end": 415,
"text": "(Marin et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, while the demand is still emerging, there is currently no large scale cooking dataset tailored specifically for NLP tasks. The existing resources are either not sufficiently big to make efficient use of state of the art language models, or were created with computer vision in mind. In our work, we propose a novel dataset that builds on that previous work and resources. We hope that this resource, which is currently the largest cooking recipes dataset publicly available, may further empower research in the area.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work is composed of three parts. In Section 3 we outline the problem of imitating cooking recipes and their structure. We show the limitations that caused us to recognize the existing resources as insufficient for generating complete cooking recipes. In Section 4, we introduce a novel recipes dataset built for semi-structured (Buneman, 1997) text generation, which contains over 2 million recipes. We present detailed information about the process of data gathering, deduplication, and cleansing. Finally, in Section 5 we present the implementation details and results of our experiment. We make use of a Named Entity Recognizer (NER) to extract food entities from the dataset and provide them as an input for the recipe generator, using special control tokens. This data is used to fine-tune a GPT-2 (Radford et al., 2019) language model which generates new recipes based on the given list of food entities. We use a number of evaluation methods to compare the generated output to the real recipes using the same set of food entities.",
"cite_spans": [
{
"start": 333,
"end": 348,
"text": "(Buneman, 1997)",
"ref_id": "BIBREF1"
},
{
"start": 808,
"end": 830,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, our work introduces RecipeNLG 1 -the novel dataset of cooking recipes, along with the language generation task based on this dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Dissemination of artificial neural network architectures like GPT-2 (Radford et al., 2019) , BERT (Devlin et al., 2019) or LSTM (Hochreiter and Schmidhuber, 1997) , (Merity et al., 2017) vance the field of text generation. Recent developments in neural network architectures (Krizhevsky et al., 2012) , (Liang and Hu, 2015) have enabled images to text conversion and vice versa. Publishing Recipe1M+ dataset (Salvador et al., 2017) made it reasonable to utilize deep neural networks and initiated a series of new publications. (Marin et al., 2019) combined the Recipe1M+ dataset with 13 million food images to generate joint embeddings of recipes and images. Their goal was to maximize the coherence of the generated text with its corresponding image. (Bossard et al., 2014) recognized and classified food images into 101 food categories, utilizing a dataset consisting of approximately 100K images. used the Recipe1M+ to generate simplified recipes lacking ingredient quantities and units. They evaluated their model using a perplexity score as well as the adequacy between the generated text and the image.",
"cite_spans": [
{
"start": 68,
"end": 90,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 93,
"end": 119,
"text": "BERT (Devlin et al., 2019)",
"ref_id": null
},
{
"start": 128,
"end": 162,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF5"
},
{
"start": 165,
"end": 186,
"text": "(Merity et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 275,
"end": 300,
"text": "(Krizhevsky et al., 2012)",
"ref_id": "BIBREF7"
},
{
"start": 303,
"end": 323,
"text": "(Liang and Hu, 2015)",
"ref_id": "BIBREF10"
},
{
"start": 408,
"end": 431,
"text": "(Salvador et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 527,
"end": 547,
"text": "(Marin et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 752,
"end": 774,
"text": "(Bossard et al., 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "A number of efforts are underway to utilize neural language models on recipes datasets. (Parvez et al., 2018) used a dataset of 100K recipes to build an LSTM-based discriminative language model for the task of named entity recognition. They utilized a cooking recipes dataset for evaluation. (Yang et al., 2017 ) used a dataset with 31K recipes to propose reference-aware language models to generate instructions based on the ingredients provided. (Kiddon et al., 2016) presented a recurrent neural network that models global coherence. It was used to generate individual instructions based on the title and the list of ingredients. They utilized a dataset with 150K cooking recipes for model evaluation. (Yagcioglu et al., 2018 ) published a dataset consisting of approximately 20K recipes to generate question-answer pairs. (Chandu et al., 2019 ) built a custom dataset of food images and made use of the text to image approach to perform a storyboarding task for each recipe step. (Luis Herranz and Jiang, 2018) surveyed different approaches to the problem of food recognition and recipe analysis. They published a list of datasets, reported in the literature and their characteristics.",
"cite_spans": [
{
"start": 292,
"end": 310,
"text": "(Yang et al., 2017",
"ref_id": "BIBREF24"
},
{
"start": 448,
"end": 469,
"text": "(Kiddon et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 705,
"end": 728,
"text": "(Yagcioglu et al., 2018",
"ref_id": "BIBREF23"
},
{
"start": 826,
"end": 846,
"text": "(Chandu et al., 2019",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "(Majumder et al., 2019) proposed the task of personalized recipe generation, and have shared a dataset of 180K recipes and 700K user interactions (reviews). The authors used an encoder-decoder framework to generate recipes and conducted an evaluation using text metrics. They encoded three embedding layers: title, ingredient, and caloriclevel using BERT then decoded recipes steps using a two-layered GRU. (Lee et al., 2020) have recently presented demo paper of their system for the automatic generation of cooking recipes utilizing the Recipe1M+ dataset and a language model. The evaluation of the model was based on translation metrics. They focused on two separate tasks: ingredients, and instructions generation. On the contrary, we use prepared food entities (see Section 5.1) to generate complete recipes, which allows pairwise comparison of the original and generated recipe composed of the same set of ingredients.",
"cite_spans": [
{
"start": 407,
"end": 425,
"text": "(Lee et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "We also propose a new task of generating full recipes with quantities and units. We publish a carefully prepared RecipeNLG dataset containing both recipes and tagged food entities, to ease the process of generating and evaluating recipes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Cooking recipes have a specific format which consists of: a title, a list of ingredients with given amounts, and the instructions in a step by step format. The shortest part of the recipe, the title, should accurately name it and summarize its content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recipes as datasets",
"sec_num": "3"
},
{
"text": "The ingredients list has to contain entities consisting of the quantity, unit name, and ingredient name. The quantities of all ingredients have to be in line with the number of servings the recipe is made for. The unit name has to be in relation to the quantity. It must be appropriate to the ingredient form (liquid, dry countable, dry uncountable).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recipes as datasets",
"sec_num": "3"
},
{
"text": "Finally, all the units in the recipe are expected to follow a single unit system -imperial or metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recipes as datasets",
"sec_num": "3"
},
{
"text": "The instructions section needs to accurately present the order of steps. The actions performed on every ingredient have to be taken into account in the following recipe steps, which should reflect the state of the ingredient after the given action. All the ingredients from list should be used, and their usage quantities match those given on the ingredient list. Finally, some recipes use references to a step number of prior actions, which makes the step dependent on other steps and their ordinal numbers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recipes as datasets",
"sec_num": "3"
},
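{
"text": "To make the format above concrete, a record in this shape can be sketched as a Python dictionary. The field names and the 'source' column are illustrative rather than the dataset's exact schema; the values come from the example recipe in Table 1.\n\n# A hypothetical RecipeNLG-style record (field names illustrative).\nrecipe = {\n    'title': 'Classic Chicken Tenderloin',\n    'ingredients': [\n        '1 lb chicken breast tenders',\n        '1/2 cup Italian dressing',\n        '1 teaspoon fresh lime juice',\n        '1 1/2 teaspoons honey',\n    ],\n    'directions': [\n        'Drain and discard spices from the Italian dressing.',\n        'Combine dressing, lime juice, and honey.',\n        'Marinate the chicken tenders in this mixture for at least one hour.',\n        'Grill chicken to a lightly golden color.',\n    ],\n    'source': 'Gathered',  # origin column: Recipe1M+ or Gathered data\n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recipes as datasets",
"sec_num": "3"
},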
{
"text": "We considered using the Recipe1M+ dataset for our task, but it became clear that it has certain limitations regarding the validity of the recipe structure. To investigate these issues, we prepared a set of corresponding recipes built of 350, 141 pairs of recipes, identified by the same URL. This implies that both recipes in the pair, originated in the same place. They are considered a duplicate, despite not having exactly the same content. Example differences in content, as a result of different processing techniques is presented in Table 1 . The set of corresponding recipes can be divided into two subsets -Recipe1M+ subset (R s ) and Gathered subset (G s ).",
"cite_spans": [],
"ref_spans": [
{
"start": 539,
"end": 546,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Recipes as datasets",
"sec_num": "3"
},
{
"text": "During the data exploration process, we noticed that the number of instructions in the corresponding recipes varies, usually it is larger in R s . To explain this difference, we manually verified 100 randomly selected pairs of corresponding recipes and found that 34 of them had the same structure, and in 62 cases recipes from R s were malformed -had more steps than the original ones, while recipes from G s kept the original structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recipes as datasets",
"sec_num": "3"
},
{
"text": "We discovered that the recipe instructions in the Recipe1M+ dataset might have been segmented into sentences instead of actual steps (see Table 1 ). To find out whether this explanation is correct, we split recipe instructions form G s into sentences. The distribution of the number of obtained sentences in the recipe is similar to the R s instructions distribution, which indicates that R s recipes structure might have been altered. As our efforts aimed at generating semi-structured text, any changes in the structure of the documents are not acceptable.",
"cite_spans": [],
"ref_spans": [
{
"start": 138,
"end": 145,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Recipes as datasets",
"sec_num": "3"
},
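{
"text": "A minimal sketch of this check, assuming NLTK for sentence splitting (the paper does not name its tooling, so the library choice is illustrative):\n\nimport nltk\nfrom nltk.tokenize import sent_tokenize\n\nnltk.download('punkt', quiet=True)  # sentence tokenizer model\n\ndef step_counts(instruction_lists):\n    # instruction_lists: one list of instruction steps per G_s recipe.\n    as_written = [len(steps) for steps in instruction_lists]\n    # Re-split every step into sentences, mimicking what Recipe1M+ appears to do.\n    as_sentences = [sum(len(sent_tokenize(step)) for step in steps) for steps in instruction_lists]\n    return as_written, as_sentences",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recipes as datasets",
"sec_num": "3"
},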
{
"text": "Another issue we encountered during the data exploration, is the absence or malformation of fractions which we observed in Recipe1M+ (see Table 1). We manually checked the same randomly selected 100 pairs and found, that 79 recipes from the R s dataset missed at least one fraction from the set of ingredients, while the recipes from G s were correctly reflecting the actual fractions in all cases. Furthermore, we found that the total number of recipes that had zero fractions was five times greater in R s than in G s .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recipes as datasets",
"sec_num": "3"
},
{
"text": "Distortion of the fractions in this scale makes quantitative analyses pointless. Moreover, the text generator trained on this data would be unable to create logically coherent lists of ingredients.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recipes as datasets",
"sec_num": "3"
},
{
"text": "The results presented in Section 3 indicate the need for an enhanced dataset, appropriate for semi-structured text generation. We prepared a novel dataset named RecipeNLG, built on top of Recipe1M+, but enhanced with new and corrected records. Additional recipes were gathered from multiple cooking web pages, using automated scripts in a web scraping process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RecipeNLG dataset",
"sec_num": "4"
},
{
"text": "During the exploratory data analysis multiple problems regarding the structure of recipes were found and corrected. Recipes without any ingredients or instructions were considered to be extraction errors and were removed. We removed the excessive whitespace characters and replaced unicode symbols, (e.g., fractions) with their ASCII equivalents. Finally, the Recipe1M+ dataset was appended to the gathered data. The RecipeNLG dataset contains an additional column, that identifies the origin of each record -Recipe1M+ or Gathered data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset cleansing",
"sec_num": "4.1"
},
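{
"text": "A minimal sketch of the cleansing steps described above (the fraction table and helper names are illustrative; the actual cleaning scripts are not part of the paper):\n\nimport re\n\n# Map common Unicode vulgar fractions to ASCII equivalents (extend as needed).\nFRACTIONS = {'\\u00bd': '1/2', '\\u00bc': '1/4', '\\u00be': '3/4'}\n\ndef clean_text(text):\n    for symbol, ascii_fraction in FRACTIONS.items():\n        text = text.replace(symbol, ascii_fraction)\n    # Collapse excessive whitespace characters.\n    return re.sub(r'\\s+', ' ', text).strip()\n\ndef is_valid(recipe):\n    # Records without ingredients or instructions are treated as extraction errors.\n    return bool(recipe['ingredients']) and bool(recipe['directions'])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset cleansing",
"sec_num": "4.1"
},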
{
"text": "Deduplication was required to ensure that records do not overlap in the resulting set of the recipes. We began with finding duplicated recipes identified by the same URL -recipes downloaded from the same source are supposed to be identical. Then, pairs consisting of the same sequence of characters in instructions and ingredients were detected and removed. Finally, we found and removed near matches. The cosine similarity score was calculated pairwise upon a TF-IDF representation of the recipe ingredients and instructions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset cleansing",
"sec_num": "4.1"
},
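{
"text": "A minimal sketch of the near-match detection, assuming scikit-learn (the paper does not name its implementation, and at the full dataset scale an exact pairwise pass is quadratic, so blocking or approximate search would be needed in practice):\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.metrics.pairwise import cosine_similarity\n\ndef near_duplicate_pairs(docs, threshold):\n    # docs: one string per recipe, concatenating ingredients and instructions.\n    vectors = TfidfVectorizer().fit_transform(docs)\n    similarities = cosine_similarity(vectors)\n    pairs = []\n    for i in range(len(docs)):\n        for j in range(i + 1, len(docs)):\n            if similarities[i, j] >= threshold:\n                pairs.append((i, j))\n    return pairs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset cleansing",
"sec_num": "4.1"
},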
{
"text": "Based on the corresponding recipes set (Section 3), we have determined the value of a duplication threshold as the minimum value of cosine similarity, starting from which a pair of records is considered to be a duplicate, by comparing the set of known duplicates with the set of candidate duplicates for each threshold value (Figure 2 ). For the duplication threshold, we chose the value where \u2022 (Some may elect to keep the spices; the recipe will still turn out but will have a different flavor than intended. \u2022 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 325,
"end": 334,
"text": "(Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Dataset cleansing",
"sec_num": "4.1"
},
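{
"text": "The threshold selection can be sketched as a grid search over candidate thresholds, scoring each against the known duplicates from the corresponding-recipes set (function and variable names are illustrative):\n\ndef pick_threshold(similarity, candidate_pairs, known_duplicates, grid):\n    # similarity: dict mapping a candidate pair to its cosine similarity score.\n    best_f1, best_threshold = 0.0, None\n    for t in grid:\n        predicted = {p for p in candidate_pairs if similarity[p] >= t}\n        true_positives = len(predicted & known_duplicates)\n        precision = true_positives / len(predicted) if predicted else 0.0\n        recall = true_positives / len(known_duplicates)\n        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0\n        if f1 > best_f1:\n            best_f1, best_threshold = f1, t\n    return best_threshold, best_f1  # the paper reports the best F1 at threshold 0.92",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset cleansing",
"sec_num": "4.1"
},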
{
"text": "We filtered out recipes in languages other than English. To recognize language of the recipe, we used only instructions, since foreign names (e.g., croissant) are common in titles and ingredients names, and may mislead the classifier. We used Google Translate API for language detection task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset cleansing",
"sec_num": "4.1"
},
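{
"text": "The filtering step can be sketched as follows; since the Google Translate API requires credentials, this sketch substitutes the freely available langdetect package, which is not what the paper used:\n\nfrom langdetect import detect\n\ndef is_english(recipe):\n    # Classify on the instructions only; titles and ingredient names\n    # often contain foreign loanwords (e.g., croissant).\n    try:\n        return detect(' '.join(recipe['directions'])) == 'en'\n    except Exception:  # empty or undetectable text\n        return False",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset cleansing",
"sec_num": "4.1"
},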
{
"text": "The RecipeNLG dataset contains 2, 231, 142 distinct cooking recipes and to the best of our knowledge, it is the largest available dataset in the domain. Figure 3 presents distributions of the number of elements in instructions, it visualizes the trend described in Section 3. This suggests, that recipes in RecipeNLG are more likely to have a structure consistent with the original recipes. ",
"cite_spans": [],
"ref_spans": [
{
"start": 153,
"end": 161,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "RecipeNLG metrics",
"sec_num": "4.2"
},
{
"text": "We present our experiment performed on the RecipeNLG dataset. The goal was to prepare a model, which makes use of food entities to generate a complete cooking recipe. To accomplish this task, we prepared a NER model for identifying and extracting food entities. A GPT-2 model was fine-tuned for the recipe generation. The generated recipes were compared against the original recipes, using automatic evaluation metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "5"
},
{
"text": "To use the NER for this problem, it was necessary to teach it what ingredients are. In order to determine the collection of ingredients, a subset of 500 recipes was manually annotated. This training data allowed us to extract food entities from the rest of the dataset. In total, the chosen recipes contained about 2,400 individual ingredients. We created the penalty metric to evaluate how precisely the model extracts a food entity (set of tokensT ) from an ingredient, based on a test set (set of tokens T ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying food entities",
"sec_num": "5.1"
},
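{
"text": "The paper does not name its NER toolkit; purely as an illustration, a food-entity recognizer could be trained on the annotated recipes with spaCy along these lines:\n\nimport spacy\nfrom spacy.training import Example\n\nnlp = spacy.blank('en')\nner = nlp.add_pipe('ner')\nner.add_label('FOOD')\n\n# One manually annotated ingredient line; character offsets mark the food entity.\nTRAIN_DATA = [('1 lb chicken breast tenders', {'entities': [(5, 27, 'FOOD')]})]\n\noptimizer = nlp.initialize()\nfor _ in range(30):  # several passes over the 500 annotated recipes\n    for text, annotations in TRAIN_DATA:\n        example = Example.from_dict(nlp.make_doc(text), annotations)\n        nlp.update([example], sgd=optimizer)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying food entities",
"sec_num": "5.1"
},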
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "penalty(T , T ) = \uf8f1 \uf8f2 \uf8f3 0 ifT = T 0.5 ifT \u2282 T 1 ifT \u2229 T = \u2205",
"eq_num": "(1)"
}
],
"section": "Identifying food entities",
"sec_num": "5.1"
},
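{
"text": "A direct Python transcription of Equation (1) over token sets; note that the partially overlapping, non-subset case is not covered by the definition:\n\ndef penalty(extracted, gold):\n    # extracted: token set produced by the NER; gold: annotated token set.\n    if extracted == gold:\n        return 0.0\n    if extracted < gold:  # proper subset of the gold entity\n        return 0.5\n    if not (extracted & gold):  # no overlap at all\n        return 1.0\n    raise ValueError('partial overlap is not covered by Equation (1)')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying food entities",
"sec_num": "5.1"
},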
{
"text": "Since we allow partial matching of the result and the classified ingredient, we decided not to use standard metrics, such as precision and recall to evaluate NER performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying food entities",
"sec_num": "5.1"
},
{
"text": "As a proof of concept for the usage of our dataset, we have created a language model based on the Hugging Face (Wolf et al., 2019) implementation of the pretrained GPT-2 (Radford et al., 2019) . Before training, we performed several postprocessing operations on the dataset to ensure it is ready for our use case. It was crucial to create a model that generates \"rich\", extensive recipes. We decided to remove recipes with very short titles or instructions sections. We also removed recipes which contain phrases: 'step' in instructions, to remove the possibility of cross-step references based on ordinal numbers, and 'mix all', which lead the model to a preference of mixing everything over preparing detailed instructions.",
"cite_spans": [
{
"start": 111,
"end": 130,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 170,
"end": 192,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating recipes from food entities",
"sec_num": "5.2"
},
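{
"text": "A minimal sketch of these filters; the exact length cut-offs are not stated in the paper, so the thresholds below are placeholders:\n\ndef keep_for_training(recipe, min_title_words=2, min_steps=2):\n    if len(recipe['title'].split()) < min_title_words:\n        return False\n    if len(recipe['directions']) < min_steps:\n        return False\n    instructions = ' '.join(recipe['directions']).lower()\n    # Drop cross-step references and trivial 'mix all' recipes.\n    return 'step' not in instructions and 'mix all' not in instructions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating recipes from food entities",
"sec_num": "5.2"
},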
{
"text": "The model was given a set of food entities and ordered to generate full recipes. A set of control tokens (visible on Figure 1 ) was prepared and embedded in the dataset. This has allowed the model to understand the recipe's underlying structure. Both the original recipes and the extracted food entities were used to prepare the training input. We placed multiple tokenized recipes into one context to speed up the training process. If the training sample was still shorter than the required size, the remaining space was filled with end of recipe tokens.",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 125,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Generating recipes from food entities",
"sec_num": "5.2"
},
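{
"text": "A sketch of the input construction: the control-token layout follows Figure 1, while the packing helper assumes each tokenized recipe fits within one context block (helper names are illustrative):\n\ndef to_training_string(entities, recipe):\n    # Control-token layout as in Figure 1.\n    return ('<RECIPE_START><INPUT_START>' + '<NEXT_INPUT>'.join(entities)\n            + '<INPUT_END><INGR_START>' + '<NEXT_INGR>'.join(recipe['ingredients'])\n            + '<INGR_END><INSTR_START>' + '<NEXT_INSTR>'.join(recipe['directions'])\n            + '<INSTR_END><TITLE_START>' + recipe['title'] + '<TITLE_END><RECIPE_END>')\n\ndef pack(tokenized_recipes, block_size, end_token_id):\n    # Place whole recipes into fixed-size blocks; pad the tail with end-of-recipe tokens.\n    blocks, current = [], []\n    for tokens in tokenized_recipes:\n        if current and len(current) + len(tokens) > block_size:\n            blocks.append(current + [end_token_id] * (block_size - len(current)))\n            current = []\n        current.extend(tokens)\n    if current:\n        blocks.append(current + [end_token_id] * (block_size - len(current)))\n    return blocks",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating recipes from food entities",
"sec_num": "5.2"
},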
{
"text": "We selected a set of 100 recipes that were not used in training, to form a gold standard. Based on the food entities of each record from the gold standard 10 recipes were generated using two models: one trained on RecipeNLG, and one trained on Recipe1M+ dataset. This resulted in 2000 generated recipes used to evaluate these two models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.3"
},
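{
"text": "Generation with the fine-tuned model can be sketched with the Hugging Face API; the checkpoint path and sampling settings are placeholders, not the paper's exact configuration:\n\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\n\ntokenizer = GPT2Tokenizer.from_pretrained('path/to/finetuned-checkpoint')\nmodel = GPT2LMHeadModel.from_pretrained('path/to/finetuned-checkpoint')\n\n# Prompt the model with the food entities wrapped in control tokens.\nprompt = '<RECIPE_START><INPUT_START>beef<NEXT_INPUT>onion<INPUT_END>'\ninputs = tokenizer(prompt, return_tensors='pt')\noutputs = model.generate(**inputs, do_sample=True, top_k=50, max_length=512,\n                         num_return_sequences=10, pad_token_id=tokenizer.eos_token_id)\nrecipes = [tokenizer.decode(ids) for ids in outputs]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.3"
},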
{
"text": "Firstly, we used cosine similarity calculated upon TF-IDF representation to measure the similarity of a generated recipe and its gold standard counterpart. The results have shown that a RecipeNLG model generates recipes more similar to the gold standard than the Recipe1M+ model (0.666 and 0.589 average cosine similarity, respectively).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.3"
},
{
"text": "We used the LanguageCheck spell and grammar checker to calculate the amount of linguistic mistakes -a metric that allowed us to estimate the overall performance of the model, and is applicable for a variety of texts. We calculated the average number of errors per recipe. There were fewer errors in the RecipeNLG model (2.78) than in the Recipe1M+ (7.35). Interestingly, the RecipeNLG model scored better than the gold standard (3.64).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.3"
},
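{
"text": "A sketch of the error counting; the paper names LanguageCheck, and the snippet below uses the language_tool_python wrapper around LanguageTool as a stand-in:\n\nimport language_tool_python\n\ntool = language_tool_python.LanguageTool('en-US')\n\ndef error_count(recipe_text):\n    # Number of spelling and grammar issues flagged in one generated recipe.\n    return len(tool.check(recipe_text))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.3"
},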
{
"text": "The last approach to the evaluation was the utilization of translation metrics. We used three common ones: BLEU (Papineni et al., 2002) , GLEU (Wu et al., 2016) , and WER (Word Error Rate). Scores achieved by each set are outlined in Table 2 . The model trained on our dataset scored better on all of the translation metrics.",
"cite_spans": [
{
"start": 112,
"end": 135,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF15"
},
{
"start": 143,
"end": 160,
"text": "(Wu et al., 2016)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 234,
"end": 241,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.3"
},
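{
"text": "These metrics can be sketched with NLTK (BLEU and GLEU) plus a small dynamic-programming WER; the strings below are toy examples, not data from the experiment:\n\nfrom nltk.translate.bleu_score import sentence_bleu, SmoothingFunction\nfrom nltk.translate.gleu_score import sentence_gleu\n\ndef wer(reference, hypothesis):\n    # Word Error Rate: edit distance over words, normalized by reference length.\n    r, h = reference.split(), hypothesis.split()\n    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]\n    for i in range(len(r) + 1):\n        d[i][0] = i\n    for j in range(len(h) + 1):\n        d[0][j] = j\n    for i in range(1, len(r) + 1):\n        for j in range(1, len(h) + 1):\n            cost = 0 if r[i - 1] == h[j - 1] else 1\n            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)\n    return d[len(r)][len(h)] / len(r)\n\nreference = 'combine dressing lime juice and honey'\nhypothesis = 'mix dressing lime juice and honey'\nbleu = sentence_bleu([reference.split()], hypothesis.split(), smoothing_function=SmoothingFunction().method1)\ngleu = sentence_gleu([reference.split()], hypothesis.split())\nprint(bleu, gleu, wer(reference, hypothesis))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.3"
},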
{
"text": "While the RecipeNLG dataset is based on the Recipe1M+ dataset, it greatly expands the number of recipes available. What is even more important, the dataset comes with a changed scope -we didn't follow the idea of linking cooking recipes with their images, putting emphasis on the recipe text, structure and underlying logic. The new dataset provides over 1 million new, preprocessed and deduplicated recipes on top of the Recipe1M+ dataset. To the best of our knowledge, it is the largest publicly available dataset in the domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions & Future work",
"sec_num": "6"
},
{
"text": "Our dataset, contrary to Recipe1M+, preserves unmodified ingredients quantities. It creates an opportunity to evaluate if the quantities are correctly generated by the model. In the future works, it could allow their normalization to a specific amount of servings. Another interesting potential work is on unification of mostly ambiguous units (e.g. cups, pinch) with regards to the item they are describing, which could have many uses in and outside of the culinary world, and further unification using knowledge graphs (Lawrynowicz, 2020).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions & Future work",
"sec_num": "6"
},
{
"text": "The challenges we faced can be generalized to the other examples of text generation tasks. Therefore, we make this dataset public, expecting that it could enable new research in the area.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions & Future work",
"sec_num": "6"
},
{
"text": "recipenlg.cs.put.poznan.pl",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Model prototyping was supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC) programme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Food-101 -mining discriminative components with random forests",
"authors": [
{
"first": "Lukas",
"middle": [],
"last": "Bossard",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Guillaumin",
"suffix": ""
},
{
"first": "Luc",
"middle": [],
"last": "Van Gool",
"suffix": ""
}
],
"year": 2014,
"venue": "Computer Vision -ECCV 2014",
"volume": "",
"issue": "",
"pages": "446--461",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. 2014. Food-101 -mining discriminative components with random forests. In Computer Vi- sion -ECCV 2014, pages 446-461, Cham. Springer International Publishing.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semistructured data",
"authors": [
{
"first": "P",
"middle": [],
"last": "Buneman",
"suffix": ""
}
],
"year": 1997,
"venue": "PODS '97",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Buneman. 1997. Semistructured data. In PODS '97.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Storyboarding of recipes: Grounded contextual generation",
"authors": [
{
"first": "Khyathi",
"middle": [],
"last": "Chandu",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nyberg",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6040--6046",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1606"
]
},
"num": null,
"urls": [],
"raw_text": "Khyathi Chandu, Eric Nyberg, and Alan W Black. 2019. Storyboarding of recipes: Grounded contex- tual generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 6040-6046, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Pa- pers), pages 4171-4186.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The Anatomy of a Recipe",
"authors": [
{
"first": "M",
"middle": [
"F K"
],
"last": "Fisher",
"suffix": ""
}
],
"year": 1969,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. F. K. Fisher. 1969. The Anatomy of a Recipe, With Bold Knife and Fork. Counterpoint.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "",
"pages": "1735--80",
"other_ids": {
"DOI": [
"10.1162/neco.1997.9.8.1735"
]
},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9:1735- 80.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Globally coherent text generation with neural checklist models",
"authors": [
{
"first": "Chlo\u00e9",
"middle": [],
"last": "Kiddon",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "329--339",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1032"
]
},
"num": null,
"urls": [],
"raw_text": "Chlo\u00e9 Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. Globally coherent text generation with neural checklist models. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing, pages 329-339, Austin, Texas. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Imagenet classification with deep convolutional neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2012,
"venue": "Advances in Neural Information Processing Systems",
"volume": "25",
"issue": "",
"pages": "1097--1105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. Imagenet classification with deep con- volutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097-1105. Curran Associates, Inc.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Creative AI: A new avenue for the Semantic Web? Semantic Web",
"authors": [],
"year": null,
"venue": "",
"volume": "11",
"issue": "",
"pages": "69--78",
"other_ids": {
"DOI": [
"10.3233/SW-190377"
]
},
"num": null,
"urls": [],
"raw_text": "Agnieszka Lawrynowicz. 2020. Creative AI: A new avenue for the Semantic Web? Semantic Web, 11(1):69-78.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "RecipeGPT: Generative pretraining based cooking recipe generation and evaluation system",
"authors": [
{
"first": "Helena",
"middle": [
"H"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Palakorn",
"middle": [],
"last": "Achananuparp",
"suffix": ""
},
{
"first": "Philips",
"middle": [
"Kokoh"
],
"last": "Prasetyo",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ee-Peng",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Lav",
"middle": [
"R"
],
"last": "Varshney",
"suffix": ""
}
],
"year": 2020,
"venue": "Companion Proceedings of the Web Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helena H. Lee, Ke Shu, Palakorn Achananuparp, Philips Kokoh Prasetyo, Yue Liu, Ee-Peng Lim, and Lav R. Varshney. 2020. RecipeGPT: Generative pre- training based cooking recipe generation and evalua- tion system. In Companion Proceedings of the Web Conference 2020.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Recurrent convolutional neural network for object recognition",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Xiaolin",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2015,
"venue": "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming Liang and Xiaolin Hu. 2015. Recurrent convolu- tional neural network for object recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Food recognition and recipe analysis: integrating visual content, context and external knowledge",
"authors": [
{
"first": "Weiqing",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Herranz",
"suffix": ""
},
{
"first": "Shuqiang",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2018,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weiqing Min Luis Herranz and Shuqiang Jiang. 2018. Food recognition and recipe analysis: integrating vi- sual content, context and external knowledge. ArXiv, abs/1801.07239v1.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Generating personalized recipes from historical user preferences",
"authors": [
{
"first": "Bodhisattwa",
"middle": [
"Prasad"
],
"last": "Majumder",
"suffix": ""
},
{
"first": "Shuyang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jianmo",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "McAuley",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "5975--5981",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bodhisattwa Prasad Majumder, Shuyang Li, Jianmo Ni, and Julian McAuley. 2019. Generating personalized recipes from historical user preferences. In EMNLP, pages 5975-5981.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Recipe1m+: A dataset for learning cross-modal embeddings for cooking recipes and food images",
"authors": [
{
"first": "Javier",
"middle": [],
"last": "Marin",
"suffix": ""
},
{
"first": "Aritro",
"middle": [],
"last": "Biswas",
"suffix": ""
},
{
"first": "Ferda",
"middle": [],
"last": "Ofli",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Hynes",
"suffix": ""
},
{
"first": "Amaia",
"middle": [],
"last": "Salvador",
"suffix": ""
},
{
"first": "Yusuf",
"middle": [],
"last": "Aytar",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Trans. Pattern Anal. Mach. Intell",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Javier Marin, Aritro Biswas, Ferda Ofli, Nicholas Hynes, Amaia Salvador, Yusuf Aytar, Ingmar Weber, and Antonio Torralba. 2019. Recipe1m+: A dataset for learning cross-modal embeddings for cooking recipes and food images. IEEE Trans. Pattern Anal. Mach. Intell.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Regularizing and optimizing LSTM language models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Shirish Keskar",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2017. Regularizing and optimiz- ing LSTM language models. ArXiv preprint ArXiv:1708.02182.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics (ACL), pages 311-318.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Building language models for text with named entities",
"authors": [
{
"first": "Md",
"middle": [
"Rizwan"
],
"last": "Parvez",
"suffix": ""
},
{
"first": "Saikat",
"middle": [],
"last": "Chakraborty",
"suffix": ""
},
{
"first": "Baishakhi",
"middle": [],
"last": "Ray",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2373--2383",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Md Rizwan Parvez, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2018. Building language mod- els for text with named entities. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2373-2383. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Inverse cooking: Recipe generation from food images",
"authors": [
{
"first": "Amaia",
"middle": [],
"last": "Salvador",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Drozdzal",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Gir\u00f3",
"suffix": ""
},
{
"first": "Adriana",
"middle": [],
"last": "Romero",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "10445--10454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amaia Salvador, Michal Drozdzal, Xavier Gir\u00f3, and Adriana Romero. 2019. Inverse cooking: Recipe generation from food images. 2019 IEEE/CVF Con- ference on Computer Vision and Pattern Recognition (CVPR), pages 10445-10454.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning cross-modal embeddings for cooking recipes and food images",
"authors": [
{
"first": "Amaia",
"middle": [],
"last": "Salvador",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Hynes",
"suffix": ""
},
{
"first": "Yusuf",
"middle": [],
"last": "Aytar",
"suffix": ""
},
{
"first": "Javier",
"middle": [],
"last": "Mar\u00edn",
"suffix": ""
},
{
"first": "Ferda",
"middle": [],
"last": "Ofli",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "3068--3076",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amaia Salvador, Nicholas Hynes, Yusuf Aytar, Javier Mar\u00edn, Ferda Ofli, Ingmar Weber, and Antonio Tor- ralba. 2017. Learning cross-modal embeddings for cooking recipes and food images. 2017 IEEE Con- ference on Computer Vision and Pattern Recognition (CVPR), pages 3068-3076.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Keith",
"middle": [],
"last": "Stevens",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Kurian",
"suffix": ""
},
{
"first": "Nishant",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Cliff",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Riesa",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Rudnick",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2016,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rud- nick, Oriol Vinyals, Gregory S. Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neu- ral machine translation system: Bridging the gap between human and machine translation. ArXiv, abs/1609.08144.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "RecipeQA: A challenge dataset for multimodal comprehension of cooking recipes",
"authors": [
{
"first": "Semih",
"middle": [],
"last": "Yagcioglu",
"suffix": ""
},
{
"first": "Aykut",
"middle": [],
"last": "Erdem",
"suffix": ""
},
{
"first": "Erkut",
"middle": [],
"last": "Erdem",
"suffix": ""
},
{
"first": "Nazli",
"middle": [],
"last": "Ikizler-Cinbis",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1358--1368",
"other_ids": {
"DOI": [
"10.18653/v1/d18-1166"
]
},
"num": null,
"urls": [],
"raw_text": "Semih Yagcioglu, Aykut Erdem, Erkut Erdem, and Na- zli Ikizler-Cinbis. 2018. RecipeQA: A challenge dataset for multimodal comprehension of cooking recipes. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP (2018), pages 1358-1368.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Reference-aware language models",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1850--1859",
"other_ids": {
"DOI": [
"10.18653/v1/d17-1197"
]
},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Phil Blunsom, Chris Dyer, and Wang Ling. 2017. Reference-aware language models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, pages 1850-1859.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Concept schema of the semi-structured text evaluation pipeline.",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Cosine similarity threshold value selection for a dataset deduplication task.",
"num": null,
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"text": "Comparison of number of lines of instructions between datasets. Triangles denote mean values.",
"num": null,
"uris": null
},
"TABREF0": {
"text": "allowed to ad-Web scraped and acquired recipes ingr: [\"3/4 lbs. lean beef\", ...], instr: [\"Combine all ingredients.\" ...], title: \"Spicy Stuffed Peppers\"",
"content": "<table><tr><td colspan=\"2\">Constructing dataset</td></tr><tr><td/><td/><td>Extracting food entites</td><td>input: [ \"beef\", ... ]</td></tr><tr><td colspan=\"2\">Adding control tokens to form a plain text input</td><td>Adding NER result</td></tr><tr><td colspan=\"3\">&lt;RECIPE_START&gt;&lt;INPUT_START&gt; beef &lt;NEXT_INPUT&gt; ...</td></tr><tr><td colspan=\"3\">&lt;INPUT_END&gt; &lt;INGR_START&gt; 3/4 lbs. lean beef</td></tr><tr><td colspan=\"3\">&lt;NEXT_INGR&gt; ...&lt;INGR_END&gt; &lt;INSTR_START&gt; Combine all</td></tr><tr><td colspan=\"3\">ingredients.&lt;NEXT_INSTR&gt; ... &lt;INSTR_END&gt; &lt;TITLE_START&gt;</td></tr><tr><td colspan=\"3\">Spicy Stuffed Peppers &lt;TITLE_END&gt;&lt;RECIPE_END&gt;</td></tr><tr><td>Tokenizing</td><td colspan=\"2\">Training the</td></tr><tr><td/><td>model</td></tr><tr><td>[50265, 50267, 12023...]</td><td/><td>GPT2</td></tr><tr><td/><td colspan=\"2\">Generation</td></tr><tr><td/><td>Starter</td></tr><tr><td>Test set</td><td/><td>Generated</td></tr><tr><td/><td/><td>Recipes</td></tr><tr><td colspan=\"3\">Evaluating recipes. Original vs generated</td></tr><tr><td colspan=\"3\">with the same input string</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF1": {
"text": "Classic Chicken Tenderloin from www.food.com/recipe/classic-chicken-tenderloin-410132 Drain and discard spices from the Italian dressing.",
"content": "<table><tr><td>Recipe1M+</td><td>RecipeNLG</td></tr><tr><td>Ingredients missing slash character:</td><td>Valid ingredients:</td></tr><tr><td>\u2022 1 lb chicken breast tenders</td><td>\u2022 1 lb chicken breast tenders</td></tr><tr><td>\u2022 12 cup Italian dressing</td><td>\u2022 1/2 cup Italian dressing</td></tr><tr><td>\u2022 1 teaspoon fresh lime juice</td><td>\u2022 1 teaspoon fresh lime juice</td></tr><tr><td>\u2022 1 12 teaspoons honey</td><td>\u2022 1 1/2 teaspoons honey</td></tr><tr><td>Directions split into phrases:</td><td>Valid directions split:</td></tr><tr><td>\u2022</td><td/></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF2": {
"text": "Comparison of two different representations of the same recipe",
"content": "<table><tr><td/><td>1.0</td></tr><tr><td/><td>0.8</td></tr><tr><td>Value</td><td>0.6</td></tr><tr><td/><td/><td>Precision</td></tr><tr><td/><td>0.4</td><td>Recall</td></tr><tr><td/><td/><td>F1 score</td></tr><tr><td/><td/><td>Threshold=0.92</td></tr><tr><td/><td>0.2</td></tr><tr><td/><td/><td>0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00</td></tr><tr><td/><td/><td>Treshold</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF4": {
"text": "Results of machine translation metrics for GPT-2 models fine-tuned on different datasets.",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
}
}
}
}