{ "paper_id": "I13-1003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:13:53.972999Z" }, "title": "Towards Contextual Healthiness Classification of Food Items -A Linguistic Approach", "authors": [ { "first": "Michael", "middle": [], "last": "Wiegand", "suffix": "", "affiliation": { "laboratory": "", "institution": "Spoken Language Systems Saarland University", "location": { "postCode": "D-66123", "settlement": "Saarbr\u00fccken", "country": "Germany" } }, "email": "" }, { "first": "Dietrich", "middle": [], "last": "Klakow", "suffix": "", "affiliation": { "laboratory": "", "institution": "Spoken Language Systems Saarland University", "location": { "postCode": "D-66123", "settlement": "Saarbr\u00fccken", "country": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We explore the feasibility of contextual healthiness classification of food items. We present a detailed analysis of the linguistic phenomena that need to be taken into consideration for this task based on a specially annotated corpus extracted from web forum entries. For automatic classification, we compare a supervised classifier and rule-based classification. Beyond linguistically motivated features that include sentiment information we also consider the prior healthiness of food items.", "pdf_parse": { "paper_id": "I13-1003", "_pdf_hash": "", "abstract": [ { "text": "We explore the feasibility of contextual healthiness classification of food items. We present a detailed analysis of the linguistic phenomena that need to be taken into consideration for this task based on a specially annotated corpus extracted from web forum entries. For automatic classification, we compare a supervised classifier and rule-based classification. Beyond linguistically motivated features that include sentiment information we also consider the prior healthiness of food items.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Food plays a substantial part in each of our lives. With the growing health awareness in many parts of the population, there is consequently a high demand for the knowledge about healthiness of food. In view of the variety of both different types of food and nutritional aspects it does not come as a surprise that there is no comprehensive repository of that knowledge. Since, however, much of this information is preserved in natural language text, we assume that it is possible to acquire some of this knowledge automatically with the help of natural language processing (NLP).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we take a first step towards this endeavour. 
We try to identify mentions that a food item is healthy (1) or unhealthy (2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) There is not a healthy diet without a lot of fruits, vegetables and salads.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) The day already began unhealthy: I had a piece of cake for breakfast.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This task is a pre-requisite of more complex tasks, such as finding food items that are suitable for certain groups of people with a particular health condition (3) or identifying reasons for the healthiness or unhealthiness of particular food items (4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(3) Vegetables are healthy, in particular, if you suffer from diabetes. (4) Potatoes are healthy since they are actually low in calories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The major problem of identifying some Is-Healthy or Is-Unhealthy relation is that the simple co-occurrence of a food item and the word healthy or unhealthy is not sufficiently predictive as shown in (5)-(7).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(5) Chocolate is not healthy. (6) The industry says chocolate is healthy, but I guess this is just a marketing strategy. (7) If chocolate is healthy, then I will run for the next presidential election.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We describe the contextual phenomena that underlie these cases and provide detailed statistics as to how often they occur in a typical text collection. From this analysis we derive features to be incorporated into a classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our experiments are carried out on German data. We believe, however, that our findings carry over to other languages since the aspects addressed in this work are (mostly) language universal. For the sake of general accessibility, all examples will be given as English translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To the best of our knowledge, this is the first work that addresses the classification of healthiness of food items using NLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the food domain, the most prominent research addresses ontology or thesaurus alignment (van Hage et al., 2010) , a task in which concepts from different sources are related to each other. In this context, hyponymy relations (van Hage et al., 2005) and part-whole relations (van Hage et al., 2006) have been explored. More recently, Wiegand et al. (2012a) examined extraction methods for relations involved in customer advice in a supermarket. In Chahuneau et al. 
(2012), sentiment information has been related to food prices with the help of a large corpus consisting of restaurant menus and reviews.", "cite_spans": [ { "start": 90, "end": 113, "text": "(van Hage et al., 2010)", "ref_id": "BIBREF17" }, { "start": 227, "end": 250, "text": "(van Hage et al., 2005)", "ref_id": "BIBREF15" }, { "start": 276, "end": 299, "text": "(van Hage et al., 2006)", "ref_id": "BIBREF16" }, { "start": 335, "end": 357, "text": "Wiegand et al. (2012a)", "ref_id": "BIBREF19" }, { "start": 449, "end": 472, "text": "Chahuneau et al. (2012)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In the health/medical domain, the majority of research focuses on domain-specific relations involving entities, such as genes, proteins and drugs (Cohen and Hersh, 2005).", "cite_spans": [ { "start": 144, "end": 167, "text": "(Cohen and Hersh, 2005)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "More recently, the prediction of epidemics (Fisichella et al., 2011; Torii et al., 2011; Diaz-Aviles et al., 2012; Munro et al., 2012) has attracted the attention of the research community. In addition, there has been research on processing healthcare claims (Popowich, 2005) and detecting sentiment in health-related texts (Sokolova and Bobicev, 2011).", "cite_spans": [ { "start": 43, "end": 68, "text": "(Fisichella et al., 2011;", "ref_id": "BIBREF3" }, { "start": 69, "end": 88, "text": "Torii et al., 2011;", "ref_id": "BIBREF13" }, { "start": 89, "end": 114, "text": "Diaz-Aviles et al., 2012;", "ref_id": "BIBREF2" }, { "start": 115, "end": 134, "text": "Munro et al., 2012)", "ref_id": "BIBREF9" }, { "start": 259, "end": 275, "text": "(Popowich, 2005)", "ref_id": "BIBREF10" }, { "start": 324, "end": 352, "text": "(Sokolova and Bobicev, 2011)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In order to generate a dataset for our experiments, we used a crawl of chefkoch.de 1 (Wiegand et al., 2012a) consisting of 418,558 webpages of food-related forum entries. chefkoch.de is the largest German web portal for food-related issues.", "cite_spans": [ { "start": 85, "end": 108, "text": "(Wiegand et al., 2012a)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "The Dataset", "sec_num": "3" }, { "text": "While we are aware of the fact that the healthiness of food items is also discussed in scientific texts, we think that text analysis on social media serves its own purpose. The language in social media is much more accessible to the general population. Moreover, social media can be considered an exclusive repository of popular wisdom containing, for example, home remedies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Dataset", "sec_num": "3" }, { "text": "As it is impractical for us to manually label the entire web corpus with healthiness information, we extracted for annotation sentences in which there is a healthiness marker and a mention of a food item. By healthiness marker, we understand an expression that conveys the property of being healthy. Apart from the word healthy itself, we came up with 17 further common expressions (e.g. nutritious, healthful or in good health). 
Since the word healthy covers more than 95% of the mentions of healthiness markers in our entire corpus, however, we decided to restrict our healthiness marker exclusively to mentions of that expression. Thus, our main focus in this classification task is the contextual disambiguation, i.e. the task to decide whether a specific co-occurrence of the expression healthy and some food item denotes a genuine Is-(Un)Healthy relation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Healthiness Markers & Food Items", "sec_num": "3.1" }, { "text": "The food items for which we extract cooccurrences with the healthiness marker healthy (Table 7) will henceforth be referred to as target food items. In order to obtain a suitable list of items for our experiments, we manually compiled a list of frequently occurring types of food.", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 95, "text": "(Table 7)", "ref_id": null } ], "eq_spans": [], "section": "Healthiness Markers & Food Items", "sec_num": "3.1" }, { "text": "1 www.chefkoch.de 3.2 \"Unhealthy\" vs. \"Not Healthy\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Healthiness Markers & Food Items", "sec_num": "3.1" }, { "text": "In order to obtain instances that express an Is-Unhealthy relation, we exclusively consider negated instances of the Is-Healthy relation (8). We also experimented with a dataset with mentions of the word unhealthy (paired with our target food items) to extract instances such as (9). (8) I am convinced that cake is not healthy. (9) I am convinced that cake is unhealthy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Healthiness Markers & Food Items", "sec_num": "3.1" }, { "text": "Using the same target food items, the unhealthydataset is, however, less than 14% of the size of the healthy-dataset. We also found that instances of the Is-Unhealthy-relation are not easier to detect on the unhealthy-dataset, since the unhealthydataset produced much poorer classifiers for detecting Is-Unhealthy relations than the healthydataset using negations as a proxy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Healthiness Markers & Food Items", "sec_num": "3.1" }, { "text": "Our final dataset comprises 2, 440 instances, where each instance consists of a sentence with the co-occurrence of some food item and the word healthy accompanied by the two sentences immediately preceding and the two sentences immediately following it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "4" }, { "text": "The dataset was manually annotated by two German native speakers. On 4 target food items (this corresponds to 574 target sentences) 2 we measured an inter-annotation agreement of Cohen's \u03ba = 0.7374 (Landis and Koch, 1977) which should be sufficiently high for our experiments.", "cite_spans": [ { "start": 210, "end": 221, "text": "Koch, 1977)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "4" }, { "text": "The annotators had to choose from a rich set of category labels that particularly divide the negative examples (i.e. those cases in which the cooccurrence of the target food item and healthy neither expresses an Is-Healthy nor an Is-Unhealthy relation) into different categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "4" }, { "text": "In the following, we describe the different category labels. 
Their distribution is shown in Table 1.", "cite_spans": [], "ref_spans": [ { "start": 92, "end": 99, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Annotation", "sec_num": "4" }, { "text": "This class describes instances in which an Is-Healthy relation holds between the mention of healthy and the target food item (10). ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Is-Healthy Relation (HLTH)", "sec_num": "4.1" }, { "text": "We already stated in \u00a73.2 that we consider negated instances (11) as instances of the Is-Unhealthy relation. We have a fairly broad notion of negation, e.g. (12) and (13) will also be assigned to this category. These partial negations are at least as frequent as full negations (11). However, we assume that the latter are often employed only as a means of being polite even though the speaker's intention is that of a full negation. The fact that we also observed fewer mentions of unhealthy co-occurring with a target food item than negated mentions of healthy would be in line with this theory (unhealthy is usually perceived to be more intense/blunter than not healthy). ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Is-Unhealthy Relation (UNHLTH)", "sec_num": "4.2" }, { "text": "Apart from the two target relations, we observe the following other relationships:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Relations", "sec_num": "4.3" }, { "text": "This category describes cases in which the Is-Healthy relation holds provided some additional condition is fulfilled. Typical conditions address a special way of preparing the target food item (14) or make quantitative restrictions as to the amount of the target food item to be consumed (15). As such, one cannot infer general properties of food items from restricted relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restricted Relation (RESTR)", "sec_num": "4.3.1" }, { "text": "(14) Steamed vegetables are extremely healthy. (15) A teaspoon of honey each day has been proven to be quite healthy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restricted Relation (RESTR)", "sec_num": "4.3.1" }, { "text": "In relation extraction, syntactic relatedness between the candidate entities of a relation is usually considered an important cue (Zhou et al., 2005; Mintz et al., 2009). In particular, the specific type of syntactic relation needs to be considered. If in our task healthy is an attributive adjective of the target food item (16), this is not an indication of the genuine Is-Healthy relation that we are looking for. With this construction, one usually refers to all those entities that share the two properties (intersection) of being the target food item and being healthy. This case is different from both HLTH (17) and RESTR (18).", "cite_spans": [ { "start": 130, "end": 149, "text": "(Zhou et al., 2005;", "ref_id": "BIBREF23" }, { "start": 150, "end": 169, "text": "Mintz et al., 2009)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Unspecified Intersection (INTERS)", "sec_num": "4.3.2" }, { "text": "(16) I usually buy the healthy fat. (17) Fat is healthy. 
(18) I usually buy the healthy fat, the one that contains a high degree of unsaturated fatty acids.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unspecified Intersection (INTERS)", "sec_num": "4.3.2" }, { "text": "HLTH, typically realized as a predicative adjective (17), requires that this intersection of properties includes the entire set of entities representing the target food item. For both RESTR and INTERS, on the other hand, this intersection only includes a proper subset of the target food item. In addition, RESTR provides some (vital) additional information about this subset that allows it to be (easily) identified (e.g. the property of containing a high degree of unsaturated fatty acids in (18)). For INTERS, however, no further properties that would identify this subset are specified; the information of being healthy alone is of little use, as we actually want to find out how to detect healthy food. As a consequence, instances of type INTERS are hardly informative when it comes to answering whether a particular food item is healthy or not. We do not even know how large this intersection is in proportion to the overall amount of the target food item. It may well be extremely small. That is why, in this work, instances of INTERS will be used as evidence neither for the healthiness nor for the unhealthiness of a particular food item.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unspecified Intersection (INTERS)", "sec_num": "4.3.2" }, { "text": "If the target food item is compared with another food item with regard to their healthiness status (19) & (20), one cannot conclude anything regarding the absolute healthiness of the target food item. This is due to the fact that a comparison treats healthiness as a (continuous) scale rather than a binary (discrete) property. It determines the positions of the two food items relative to each other on that particular scale. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison Relation (COMP)", "sec_num": "4.3.3" }, { "text": "In our initial data analysis, we found frequent cases in which the author of a forum entry reports a (controversial) statement regarding the healthiness status of a particular food item. These claims are often used as a means of starting a discussion about that issue (21).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupported Claim (CLAIM)", "sec_num": "4.3.4" }, { "text": "(21) Some people claim that chocolate is healthy. What do you make of it?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupported Claim (CLAIM)", "sec_num": "4.3.4" }, { "text": "If it is not possible to infer from such a reported statement that the reported view is shared by the author (and we found that this is true for many reported statements), we tag it as CLAIM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupported Claim (CLAIM)", "sec_num": "4.3.4" }, { "text": "There may also be cases in which the Is-(Un)Healthy relation is embedded in a question (22).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question (Q)", "sec_num": "4.3.5" }, { "text": "(22) Is chocolate healthy?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question (Q)", "sec_num": "4.3.5" }, { "text": "Irony (23) is a figure of speech that can frequently be observed in user-generated text (Tsur et al., 2010). 
With a proportion of less than 1%, however, it plays hardly any role in the forum entries that comprise our data collection.", "cite_spans": [ { "start": 85, "end": 104, "text": "(Tsur et al., 2010)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Irony (IRO)", "sec_num": "4.3.6" }, { "text": "(23) Everyone knows that sweets are healthy, in particular, chocolate with its many calories even makes you lose weight.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Irony (IRO)", "sec_num": "4.3.6" }, { "text": "In addition to the previous categories CLAIM and IRO, there exist other ways of embedding the healthiness relation into a context so that its general validity is discarded. We introduce a common label for all those remaining types, which include, for instance, modal embedding (24) or irrealis constructions (25).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding (EMB)", "sec_num": "4.3.7" }, { "text": "(24) Honey could be healthy. (25) If chocolate were healthy, people eating it wouldn't put on so much weight.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding (EMB)", "sec_num": "4.3.7" }, { "text": "Both the target food item and the German healthiness cue gesund are (potentially) ambiguous expressions. For instance, gesund can be part of several multiword expressions, such as gesunder Menschenverstand (engl. common sense).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Sense (SENSE)", "sec_num": "4.3.8" }, { "text": "While in all previously discussed cases the target food item and healthy are somehow related, there are cases in which the co-occurrence is merely coincidental (26).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "No Relation (NOREL)", "sec_num": "4.3.9" }, { "text": "(26) Tomatoes are very healthy and they can be ideally served on bread. (target food item: bread)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "No Relation (NOREL)", "sec_num": "4.3.9" }, { "text": "On our dataset, this is the most frequent label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "No Relation (NOREL)", "sec_num": "4.3.9" }, { "text": "All features we use are summarized in Table 2 along with examples. Apart from bag of words (word), we use the following features:", "cite_spans": [], "ref_spans": [ { "start": 38, "end": 45, "text": "Table 2", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Feature Design", "sec_num": "5" }, { "text": "The linguistic features are mainly derived from our quantitative data analysis in \u00a74. Given the limited space of this paper, we will only point out some special properties. The first group of (linguistic) features (Table 2) is designed to detect some relationship between the target food item and healthy. The co-occurrence within the same clause is usually a good predictor. There are three features to establish this property: clause, boundary and otherFood.", "cite_spans": [], "ref_spans": [ { "start": 214, "end": 222, "text": "(Table 2", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Linguistic Features", "sec_num": "5.1" }, { "text": "We already pointed out in \u00a74.3.2 that not only syntactic relatedness between healthy and the target food item as such but also the specific syntactic relation plays a decisive role for this task. 
The two most common relations are that healthy is a predicative adjective (of the target food item), which is usually indicative of HLTH, and that healthy is an attributive adjective (of the target food item), which is usually indicative of INTERS (this is the case for more than 90% of the instances labeled with INTERS on our dataset). This is reflected by the two features predRel and attrRel (and the backoff features pred and attr). An additional feature attrFood captures a special construction in which healthy as an attributive adjective actually denotes HLTH instead of INTERS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Features", "sec_num": "5.1" }, { "text": "For the conditional healthiness RESTR ( \u00a7 4.3.1), we found two predominant subcategories of restrictions: restrictions with regard to the quantity in which the target food item should be consumed (quant) and references to a specific subtype of the target food item, which we want to capture with a few precise surface patterns (spec) and a feature that checks whether the target food item precedes an attributive adjective (attrNoH). Table 2 also contains features to detect various contextual embeddings (opHolder, question, irrealis, modal and irony). opHolder is intended to detect cases of CLAIM. We assume that once some opinion holder other than the author of the forum post (i.e. the 1st person pronoun) is identified, there is a CLAIM.", "cite_spans": [], "ref_spans": [ { "start": 436, "end": 443, "text": "Table 2", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Linguistic Features", "sec_num": "5.1" }, { "text": "We also investigate whether healthiness correlates with sentiment. For instance, if the author promotes the healthiness of some food item, does this also coincide with positive sentiment (e.g. tasty, good etc.)? Our features positive/negative polar check for the presence of polar expressions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Features", "sec_num": "5.1" }, { "text": "We also incorporate features referring to the prior knowledge of the healthiness of food items. We use a lexicon introduced in Wiegand et al. (2012b), which covers approximately 3000 food items, and we refer to it as the healthiness lexicon. Each food item is specified as being either healthy or unhealthy in that lexicon. The healthiness judgment has been carried out based on the general nutrient content of each food item. A detailed description of the annotation scheme and annotation agreement can be found in Wiegand et al. (2012b). The specific features derived from that lexical resource are listed in Table 2. They are divided into two groups. prior describes the prior healthiness of the target food item. Since our task is to determine the contextual healthiness, the usage of such a feature is legitimate. The contextual healthiness need not coincide with the prior healthiness. For instance, in (27), chocolate is described as a healthy food item even though it is a priori considered unhealthy.", "cite_spans": [ { "start": 504, "end": 526, "text": "Wiegand et al. 
(2012b)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 600, "end": 607, "text": "Table 2", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Knowledge-based Features using a Healthiness Lexicon", "sec_num": "5.2" }, { "text": "(27) Chocolate is healthy as it's high in magnesium and provides vitamin E.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge-based Features using a Healthiness Lexicon", "sec_num": "5.2" }, { "text": "We use this knowledge as a baseline. If we cannot exceed the classification performance of prior (alone), then acquiring the knowledge of healthiness with the help of NLP is hardly effective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge-based Features using a Healthiness Lexicon", "sec_num": "5.2" }, { "text": "priorCont describes the prior healthiness status of neighbouring food items in the given context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge-based Features using a Healthiness Lexicon", "sec_num": "5.2" }, { "text": "We also examine rule-based classifiers since they can be built without any training data. Each classifier is defined by a (large) conjunction of linguistic features. Features indicating a class other than the target class are used as negated features in that conjunction. The rule-based classifiers only consider features whose positive or negative correlation with the target class is (more or less) obvious. Table 3 shows the rule-based classifiers for each of our classes. For HLTH, it basically states that healthy has to be a predicative adjective of the target food item (predRel), and the target food item and healthy have to appear within the same clause (or there is no boundary sign between them). After that, a long list of negated features follows: quant, spec and attrNoH, for example. The two rules are: HLTH: predRel \u2227 (clause \u2228 \u00acboundary) \u2227 \u00acquant \u2227 \u00acspec \u2227 \u00acattrNoH \u2227 \u00acnegTarget \u2227 \u00acnegHealth \u2227 \u00accomp \u2227 \u00acopHolder \u2227 \u00acmodal \u2227 \u00acirrealis \u2227 \u00acquestion \u2227 \u00acsense \u2227 \u00acweird; UNHLTH: predRel \u2227 (clause \u2228 \u00acboundary) \u2227 \u00acquant \u2227 \u00acspec \u2227 \u00acattrNoH \u2227 (negTarget \u2228 negHealth) \u2227 \u00accomp \u2227 \u00acopHolder \u2227 \u00acmodal \u2227 \u00acirrealis \u2227 \u00acquestion \u2227 \u00acsense \u2227 \u00acweird. ", "cite_spans": [], "ref_spans": [ { "start": 415, "end": 422, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Rule-based Classification", "sec_num": "6" }, { "text": "In this section, we present the results of the automatic classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "7" }, { "text": "In this subsection, we evaluate the performance of the different feature sets on sentence-level classification using supervised learning and rule-based classification. We investigate the detection of the two classes HLTH ( \u00a74.1) and UNHLTH ( \u00a74.2). Each instance to be classified is a sentence in which there is a co-occurrence of a target food item and a mention of healthy, along with its respective context sentences. The dataset was parsed using the Stanford Parser (Rafferty and Manning, 2008). We carry out a 5-fold cross-validation on our manually labeled dataset. As a supervised classifier, we use Support Vector Machines (SVM light (Joachims, 1999) with a linear kernel). 
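To make the rule-based classification of Section 6 concrete, here is a minimal sketch (Python; a hypothetical illustration under the assumption that the boolean features of Table 2 have already been extracted into a dictionary keyed by the abbreviations used in Table 3; this is not the authors' implementation):

```python
# Minimal sketch of the rule-based classifiers of Section 6 / Table 3.
# Assumption: feature extraction is given; f maps feature abbreviations to booleans.

BLOCKERS = ['quant', 'spec', 'attrNoH', 'comp', 'opHolder',
            'modal', 'irrealis', 'question', 'sense', 'weird']

def rule_hlth(f):
    # HLTH: 'healthy' is a predicative adjective of the target food item, both occur
    # in the same clause (or no boundary sign intervenes), no blocking cue, no negation.
    return (f['predRel']
            and (f['clause'] or not f['boundary'])
            and not any(f[b] for b in BLOCKERS)
            and not f['negTarget'] and not f['negHealth'])

def rule_unhlth(f):
    # UNHLTH: identical, except that either the food item or 'healthy' must be negated.
    return (f['predRel']
            and (f['clause'] or not f['boundary'])
            and not any(f[b] for b in BLOCKERS)
            and (f['negTarget'] or f['negHealth']))

# Example with hypothetical feature values for 'Cake is not healthy.':
feats = dict.fromkeys(BLOCKERS + ['boundary', 'negTarget'], False)
feats.update({'predRel': True, 'clause': True, 'negHealth': True})
print(rule_hlth(feats), rule_unhlth(feats))  # False True
```
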
For each class, we train a binary classifier where positive instances represent the class to be extracted while negative instances are the remaining instances of the entire dataset ( \u00a74). Is there a cue indicating an opinion holder other than the author? opHolder Some people claim that chocolate is healthy. This feature relies on a set of predicates indicating the presence of an opinion holder (Wiegand and Klakow, 2011) . Is target sentence a (direct) question? question Is chocolate healthy? Is healthy embedded in some irrealis context? irrealis If honey were healthy; I wonder, whether honey is healthy. Translation of the cues used in hedge classification (Morante and Daelemans, 2009) . Is healthy modified by a modal verb? modal Honey might be healthy. Is target food item negated?", "cite_spans": [ { "start": 464, "end": 492, "text": "(Rafferty and Manning, 2008)", "ref_id": "BIBREF11" }, { "start": 637, "end": 653, "text": "(Joachims, 1999)", "ref_id": "BIBREF4" }, { "start": 1074, "end": 1100, "text": "(Wiegand and Klakow, 2011)", "ref_id": "BIBREF18" }, { "start": 1341, "end": 1370, "text": "(Morante and Daelemans, 2009)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Classification of Individual Utterances", "sec_num": "7.1" }, { "text": "negTarget No cake is healthy. We adapted to German the negation word lists and the scope modeling from Wilson et al. (2005) . Is healthy negated?", "cite_spans": [ { "start": 103, "end": 123, "text": "Wilson et al. (2005)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison of Various Feature Sets", "sec_num": "7.1.1" }, { "text": "negHealth Chocolate is not healthy. We adapted to German the negation word lists and the scope modeling from Wilson et al. (2005) . Is there any occurrence of a weird word?", "cite_spans": [ { "start": 109, "end": 129, "text": "Wilson et al. (2005)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison of Various Feature Sets", "sec_num": "7.1.1" }, { "text": "weird Sure, chocolate is veeeeery healthy. Regular expression detecting suspicious reduplications of characters in order to detect irony. Does the context suggest that healthy is part of a comparison? comp We check for typical inflectional word forms (i.e. healthier and healthiest) and constructions, such as as healthy as.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Various Feature Sets", "sec_num": "7.1.1" }, { "text": "Does the context of healthy suggest another sense of the word? sense Contexts in which healthy has a different meaning (using online dictionaries, such as www.duden.de/rechtschreibung/gesund and de.wiktionary.org/wiki/gesund). Number of positive/negative polar expressions (excluding mentions of healthy) polar* Usage of the German PolArt sentiment lexicon (Klenner et al., 2009) .", "cite_spans": [ { "start": 357, "end": 379, "text": "(Klenner et al., 2009)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison of Various Feature Sets", "sec_num": "7.1.1" }, { "text": "Number of near synonyms of (un)healthy syno* Examples for healthy: high in vitamin, tonic, etc.; examples for unhealthy: carcinogenic, harmful, etc. (manually compiled list of 99 synonyms by an annotator not involved in feature engineering). 
Number of diseases disease* 411 entries, created with the help of the web (bildung.wikia.com/ wiki/Alphabetische Liste der Krankheiten).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Various Feature Sets", "sec_num": "7.1.1" }, { "text": "Abbrev.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature", "sec_num": null }, { "text": "Is target food item a priori healthy? prior* Feature employs the healthiness lexicon from Wiegand et al. (2012b) . Is target food item a priori unhealthy?", "cite_spans": [ { "start": 90, "end": 112, "text": "Wiegand et al. (2012b)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Illustration/Further Information", "sec_num": null }, { "text": "Number of food items (excluding target food item) that are a priori healthy priorCont* Feature employs the healthiness lexicon from Wiegand et al. (2012b) . Number of food items (excluding target food item) that are a priori unhealthy *: there exist two features which differ in the context they consider: (a) only target sentence (indicated by suffix -TS) (b) entire context (indicated by suffix -EC) baseline is prior (see \u00a75.2 for motivation). take-all has optimal recall but a very poor precision. The second baseline prior is notably better. prior may help to distinguish between HLTH and UNHLTH but it does not contribute to distinguishing these classes from the rest of the relation types (Table 1) .", "cite_spans": [ { "start": 132, "end": 154, "text": "Wiegand et al. (2012b)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 696, "end": 705, "text": "(Table 1)", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Illustration/Further Information", "sec_num": null }, { "text": "If we turn to the features that largely exploit contextual information, i.e. word and linguistic ( \u00a75.1), we find that both features are better than the previous features. This is an indication that learning from text is effective. The same can be said about word+linguistic and word+prior, which also outperform word. word+linguistic+prior is the best feature set outperforming both word+linguistic and word+prior. We conclude that all of the three groups of features we presented in \u00a75 are relevant for this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Illustration/Further Information", "sec_num": null }, { "text": "In terms of recall and F-score the supervised classifier always outperforms the rule-based classifier. This does not come as a surprise as the supervised classifier learns from labeled training data while the rule-based classifier is unsupervised. On the other hand, we also find that the precision of the rule-based classifier largely outperforms our best supervised classifier on HLTH.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Illustration/Further Information", "sec_num": null }, { "text": "The fact that the best overall F-score achieved is not higher may be ascribed to the heavy noise (spelling/grammar mistakes) contained in our web-data. 
However, we believe that even with those data we can show the relative effectiveness of the different feature types which is the most relevant aspect in our proof-of-concept investigation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Illustration/Further Information", "sec_num": null }, { "text": "HLTH prom, attrNoH, predRel, comp, negHealth, negative polarEC, sense, opHolder, irrealis UNHLTH negHealth, negTarget, attrRel, comp, diseaseTS, negative po-larEC Table 5 : List of the best subset of linguistic features (Table 2) for each individual class. Table 5 shows the best performing feature subset using a best-first forward selection as implemented in Weka (Witten and Frank, 2005 ", "cite_spans": [ { "start": 366, "end": 389, "text": "(Witten and Frank, 2005", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 163, "end": 170, "text": "Table 5", "ref_id": null }, { "start": 220, "end": 229, "text": "(Table 2)", "ref_id": "TABREF6" }, { "start": 257, "end": 264, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Class Features", "sec_num": null }, { "text": "We now take a closer look at anti-prior instances which are utterances in which the relation expressed is opposite to the relation that one would a priori assume, e.g. chocolate is healthy instead of chocolate is unhealthy. In our gold standard, we identified these instances with the help of the actual (manually assigned) label and our healthiness lexicon ( \u00a75.2). 4 Such instances may be very interesting to extract, even though they are rare (15% on HLTH and UNHTLH). Previously, supervised classifiers with word+prior produced similar performance as classifiers with word+linguistic (Table 4). Since linguistic features are fairly expensive to produce, the prior knowledge of healthiness seems an attractive alternative. But this is misleading. Table 6 displays the recall (by supervised classification) on only anti-prior instances and shows that the usage of prior which, in isolation, would detect none of these instances, gives a much lower recall than linguistic when added to word. Therefore, word+linguistic would be the preferable feature set if one had to choose between word+prior and word+linguistic. Table 6 : Recall on anti-prior instances.", "cite_spans": [], "ref_spans": [ { "start": 750, "end": 757, "text": "Table 6", "ref_id": null }, { "start": 1117, "end": 1124, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Detecting Anti-Prior Healthiness", "sec_num": "7.1.3" }, { "text": "Finally, we automatically rank food items according to healthiness based on the aggregate of text mentions. Ideally, the ranking should separate healthy from unhealthy food items. We want to know whether with our text corpus and contextual classification, one can actually approximate a correct prior healthiness. Aggregate classification means that we make a healthiness prediction for a specific food item based on all text mentions of that food item co-occurring with the word healthy. It may be easier to achieve a robust aggregate classification than a robust individual classification. This is because in aggregate-based tasks, there is a certain degree of redundancy contained in the data, as instances of a group of utterances (belonging to the same food item) may often comprise similar information. For such classifiers, one should focus on a higher precision since a reasonable recall is enabled by the redundancy in the data. 
Our baseline RAW is completely unsupervised and does not include any linguistic processing. We use Pointwise Mutual Information (PMI), which is estimated on our large web corpus ( \u00a73). 5 PMI(food item, healthy) = log [ P(food item, healthy) / (P(food item) P(healthy)) ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Aggregate Classification", "sec_num": "7.2" }, { "text": "For the automatic classification, we consider LEARN, which uses the output of the supervised classifier comprising the features word+linguistic (we must exclude the feature prior as this would include the knowledge we want to predict automatically in this experiment) 6 while RB is the output of the rule-based classifier we presented in \u00a76 (which does not contain prior as a feature either).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Aggregate Classification", "sec_num": "7.2" }, { "text": "In order to convert the classifications of individual utterances for a target food item (by LEARN and RB) into one ranking score (according to which we rank all the target food items), we simply compute the ratio between instances predicted to be healthy and those predicted to be unhealthy:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Aggregate Classification", "sec_num": "7.2" }, { "text": "score_LEARN/RB(food item) = #HLTH_predicted(food item) / #UNHLTH_predicted(food item)   (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Aggregate Classification", "sec_num": "7.2" }, { "text": "5 For P(food item, healthy), we consider all sentences in which the target food item and healthy co-occur. 6 We train for each target food item a classifier using only the instances with the other target food items as training data.", "cite_spans": [ { "start": 4, "end": 5, "text": "5", "ref_id": null }, { "start": 113, "end": 114, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Aggregate Classification", "sec_num": "7.2" }, { "text": "RAW wholemeal product \u227b fat \u227b colza oil \u227b vegetables \u227b tea \u227b protein \u227b olive oil \u227b honey \u227b meat \u227b sugar \u227b salad \u227b bread \u227b chocolate \u227b potato \u227b rice \u227b banana \u227b cake \u227b water \u227b egg LEARN banana \u227b olive oil \u227b wholemeal product \u227b tea \u227b colza oil \u227b salad \u227b vegetables \u227b protein \u227b potato \u227b chocolate \u227b meat \u227b bread \u227b rice \u227b water \u227b sugar \u227b cake \u227b egg \u227b fat \u227b honey RB potato \u227b protein \u227b wholemeal product \u227b banana \u227b olive oil \u227b vegetables \u227b bread \u227b salad \u227b water \u227b tea \u227b colza oil \u227b rice \u227b honey \u227b egg \u227b chocolate \u227b fat \u227b meat \u227b sugar \u227b cake Table 7 : Aggregate ranking; green denotes (actual) healthy items, red (actual) unhealthy items.", "cite_spans": [], "ref_spans": [ { "start": 535, "end": 542, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Aggregate Classification", "sec_num": "7.2" }, { "text": "where #HLTH_predicted(food item) is the number of instances for which the classifier predicts the label HLTH for the target food item, while #UNHLTH_predicted(food item) is the number of instances labeled UNHLTH. Table 7 shows the results of the three rankings. The actual labels are derived from the healthiness lexicon ( \u00a75.2). The table clearly shows that the ranking produced by RAW contains the most errors. 
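To illustrate how these rankings are derived, a minimal sketch (Python; the counts, label lists and food items are hypothetical placeholders, not the authors' code; the division-by-zero guard is an addition for illustration only):

```python
import math
from collections import Counter

def pmi(n_item_and_healthy, n_item, n_healthy, n_sentences):
    # RAW baseline: PMI of a food item and 'healthy', estimated from sentence counts
    # of the web corpus (Section 3).
    p_joint = n_item_and_healthy / n_sentences
    return math.log(p_joint / ((n_item / n_sentences) * (n_healthy / n_sentences)))

def ratio_score(labels):
    # Eq. (2): ratio of instances predicted as HLTH to instances predicted as UNHLTH
    # for one target food item (max(.., 1) only to avoid division by zero).
    counts = Counter(labels)
    return counts['HLTH'] / max(counts['UNHLTH'], 1)

# Hypothetical per-utterance predictions by LEARN or RB:
predictions = {'vegetables': ['HLTH', 'HLTH', 'UNHLTH'],
               'cake': ['UNHLTH', 'UNHLTH', 'HLTH']}
ranking = sorted(predictions, key=lambda item: ratio_score(predictions[item]), reverse=True)
print(ranking)  # ['vegetables', 'cake']
```
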
fat is the second most highly ranked food item. This can be explained by the high proportion of INTERS ( \u00a74.3.2) among the co-occurrences of fat and healthy (almost 50%). LEARN and RB produce a better ranking, thus proving that a contextual (linguistic) analysis is helpful for this task. RB also outperforms LEARN presumably because of its much higher precision (as measured for individual classification in Table 4 : 53.4% vs. 40.2% for HLTH and 45.0% vs. 40.9% for UNHLTH).", "cite_spans": [], "ref_spans": [ { "start": 224, "end": 231, "text": "Table 7", "ref_id": null }, { "start": 829, "end": 836, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Aggregate Classification", "sec_num": "7.2" }, { "text": "We presented a first step towards contextual healthiness classification of food items. For this task, we introduced a new annotation scheme. Our annotation revealed that many different linguistic phenomena are involved. Thus, this problem can be considered an interesting task for NLP. We demonstrated that a linguistic analysis is not only necessary for classifying individual utterances but also for ranking food items based on an aggregate of text mentions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "This is the only part of the dataset which was annotated by both annotators in parallel.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Restricting the co-occurrence to a certain window size did not improve the F-Score of take-all.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Whenever HLTH co-occurs with prior unhealthiness (according to the healthiness lexicon) or UNHLTH co-occurs with prior healthiness, there is an anti-prior instance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was performed in the context of the Software-Cluster project EMERGENT. Michael Wiegand was funded by the German Federal Ministry of Education and Research (BMBF) under grant no. \"01IC10S01\". The authors would like to thank Stephanie K\u00f6ser and Eva Lasarcyk for annotating the dataset presented in this paper. We would also like to thank Benjamin Roth for interesting discussions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Word Salad: Relating Food Prices and Descriptions", "authors": [ { "first": "Victor", "middle": [], "last": "Chahuneau", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Bryan", "middle": [ "R" ], "last": "Routledge", "suffix": "" }, { "first": "Lily", "middle": [], "last": "Scherlis", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL)", "volume": "", "issue": "", "pages": "1357--1367", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victor Chahuneau, Kevin Gimpel, Bryan R. Routledge, Lily Scherlis, and Noah A. Smith. 2012. Word Salad: Relating Food Prices and Descriptions. 
In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learn- ing (EMNLP/CoNLL), pages 1357-1367, Jeju Island, Ko- rea.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A survey of current work in biomedical text mining", "authors": [ { "first": "Aaron", "middle": [ "M" ], "last": "Cohen", "suffix": "" }, { "first": "William", "middle": [ "R" ], "last": "Hersh", "suffix": "" } ], "year": 2005, "venue": "Briefings in Bioinformatics", "volume": "6", "issue": "", "pages": "57--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aaron M. Cohen and William R. Hersh. 2005. A survey of current work in biomedical text mining. Briefings in Bioinformatics, 6:57 -71.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Epidemic Intelligence for the Crowd, by the Crowd", "authors": [ { "first": "Ernesto", "middle": [], "last": "Diaz-Aviles", "suffix": "" }, { "first": "Avar", "middle": [], "last": "Stewart", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Velasco", "suffix": "" }, { "first": "Kerstin", "middle": [], "last": "Denecke", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Nejdl", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the International AAAI Conference on Weblogs and Social Media (ICWSM)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ernesto Diaz-Aviles, Avar Stewart, Edward Velasco, Kerstin Denecke, and Wolfgang Nejdl. 2012. Epidemic Intelli- gence for the Crowd, by the Crowd. In Proceedings of the International AAAI Conference on Weblogs and Social Media (ICWSM), Dublin, Ireland.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Detecting Health Events on the Social Web to Enable Epidemic Intelligence", "authors": [ { "first": "Marco", "middle": [], "last": "Fisichella", "suffix": "" }, { "first": "Avar", "middle": [], "last": "Stewart", "suffix": "" }, { "first": "Alfredo", "middle": [], "last": "Cuzzocrea", "suffix": "" }, { "first": "Kerstin", "middle": [], "last": "Denecke", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the International Symposium on String Processing and Information Retrieval (SPIRE)", "volume": "", "issue": "", "pages": "87--103", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Fisichella, Avar Stewart, Alfredo Cuzzocrea, and Ker- stin Denecke. 2011. Detecting Health Events on the So- cial Web to Enable Epidemic Intelligence. In Proceedings of the International Symposium on String Processing and Information Retrieval (SPIRE), pages 87-103, Pisa, Italy.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Making Large-Scale SVM Learning Practical", "authors": [ { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 1999, "venue": "Advances in Kernel Methods -Support Vector Learning", "volume": "", "issue": "", "pages": "169--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Joachims. 1999. Making Large-Scale SVM Learn- ing Practical. In B. Sch\u00f6lkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods -Support Vector Learning, pages 169-184. 
MIT Press.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Robust Compositional Polarity Classification", "authors": [ { "first": "Manfred", "middle": [], "last": "Klenner", "suffix": "" }, { "first": "Stefanos", "middle": [], "last": "Petrakis", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fahrni", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Recent Advances in Natural Language Processing (RANLP)", "volume": "", "issue": "", "pages": "180--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manfred Klenner, Stefanos Petrakis, and Angela Fahrni. 2009. Robust Compositional Polarity Classification. In Proceedings of Recent Advances in Natural Language Processing (RANLP), pages 180-184, Borovets, Bulgaria.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The Measurement of Observer Agreement for Categorical Data", "authors": [ { "first": "J", "middle": [], "last": "", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Landis", "suffix": "" }, { "first": "Gary", "middle": [ "G" ], "last": "Koch", "suffix": "" } ], "year": 1977, "venue": "Biometrics", "volume": "33", "issue": "1", "pages": "159--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Richard Landis and Gary G. Koch. 1977. The Measure- ment of Observer Agreement for Categorical Data. Bio- metrics, 33(1):159-174.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Distant Supervision for Relation Extraction without Labeled Data", "authors": [ { "first": "Mike", "middle": [], "last": "Mintz", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bills", "suffix": "" }, { "first": "Rion", "middle": [], "last": "Snow", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACL/IJCNLP)", "volume": "", "issue": "", "pages": "1003--1011", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant Supervision for Relation Extraction with- out Labeled Data. In Proceedings of the Joint Confer- ence of the Annual Meeting of the Association for Com- putational Linguistics and the International Joint Confer- ence on Natural Language Processing of the Asian Fed- eration of Natural Language Processing (ACL/IJCNLP), pages 1003-1011, Singapore.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Learning the Scope of Hedge Cues in Biomedical Texts", "authors": [ { "first": "Roser", "middle": [], "last": "Morante", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Daelemans", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the BioNLP Workshop", "volume": "", "issue": "", "pages": "28--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roser Morante and Walter Daelemans. 2009. Learning the Scope of Hedge Cues in Biomedical Texts. 
In Proceed- ings of the BioNLP Workshop, pages 28-36, Boulder, CO, USA.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Tracking Epidemics with Natural Language Processing and Crowdsourcing", "authors": [ { "first": "Robert", "middle": [], "last": "Munro", "suffix": "" }, { "first": "Lucky", "middle": [], "last": "Gunasekara", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Nevins", "suffix": "" }, { "first": "Lalith", "middle": [], "last": "Polepeddi", "suffix": "" }, { "first": "Evan", "middle": [], "last": "Rosen", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Spring Symposium for Association for the Advancement of Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "52--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Munro, Lucky Gunasekara, Stephanie Nevins, Lalith Polepeddi, and Evan Rosen. 2012. Tracking Epidemics with Natural Language Processing and Crowdsourcing. In Proceedings of the Spring Symposium for Association for the Advancement of Artificial Intelligence (AAAI), pages 52-58, Toronto, Canada.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Using Text Mining and Natural Language Processing for Health Care Claims Processing", "authors": [ { "first": "Fred", "middle": [], "last": "Popowich", "suffix": "" } ], "year": 2005, "venue": "SIGKDD Explorations", "volume": "7", "issue": "1", "pages": "59--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fred Popowich. 2005. Using Text Mining and Natural Language Processing for Health Care Claims Processing. SIGKDD Explorations, 7(1):59-66.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Parsing Three German Treebanks: Lexicalized and Unlexicalized Baselines", "authors": [ { "first": "Anna", "middle": [], "last": "Rafferty", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the ACL Workshop on Parsing German (PaGe)", "volume": "", "issue": "", "pages": "40--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Rafferty and Christopher D. Manning. 2008. Parsing Three German Treebanks: Lexicalized and Unlexicalized Baselines. In Proceedings of the ACL Workshop on Pars- ing German (PaGe), pages 40-46, Columbus, OH, USA.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Sentiments and Opinions in Health-related Web Messages", "authors": [ { "first": "Marina", "middle": [], "last": "Sokolova", "suffix": "" }, { "first": "Victoria", "middle": [], "last": "Bobicev", "suffix": "" } ], "year": 2011, "venue": "Proceedings of Recent Advances in Natural Language Processing (RANLP)", "volume": "", "issue": "", "pages": "132--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marina Sokolova and Victoria Bobicev. 2011. Sentiments and Opinions in Health-related Web Messages. 
In Pro- ceedings of Recent Advances in Natural Language Pro- cessing (RANLP), pages 132-139, Hissar, Bulgaria.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "An exploratory study of a text classification framework for internet-based surveillance of emerging epidemics", "authors": [ { "first": "Manabu", "middle": [], "last": "Torii", "suffix": "" }, { "first": "Lanlan", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Thang", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Chand", "middle": [ "T" ], "last": "Mazumdar", "suffix": "" }, { "first": "Hongfang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Hartley", "suffix": "" }, { "first": "P", "middle": [], "last": "Noele", "suffix": "" } ], "year": 2011, "venue": "International Journal of Medical Informatics", "volume": "80", "issue": "1", "pages": "56--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manabu Torii, Lanlan Yin, Thang Nguyen, Chand T. Mazum- dar, Hongfang Liu, David M. Hartley, and Noele P. Nel- son. 2011. An exploratory study of a text classifica- tion framework for internet-based surveillance of emerg- ing epidemics. International Journal of Medical Infor- matics, 80(1):56-66.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "ICWSM -A Great Catchy Name: Semi-Supervised Recognition of Sarcastic Sentences in Online Product Reviews", "authors": [ { "first": "Oren", "middle": [], "last": "Tsur", "suffix": "" }, { "first": "Dmitry", "middle": [], "last": "Davidov", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the International AAAI Conference on Weblogs and Social Media (ICWSM), Washington", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oren Tsur, Dmitry Davidov, and Ari Rappoport. 2010. ICWSM -A Great Catchy Name: Semi-Supervised Recognition of Sarcastic Sentences in Online Product Re- views. In Proceedings of the International AAAI Confer- ence on Weblogs and Social Media (ICWSM), Washing- ton, DC, USA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A Method to Combine Linguistic Ontology-Mapping Techniques", "authors": [ { "first": "Willem", "middle": [], "last": "Robert Van Hage", "suffix": "" }, { "first": "Sophia", "middle": [], "last": "Katrenko", "suffix": "" }, { "first": "Guus", "middle": [], "last": "Schreiber", "suffix": "" } ], "year": 2005, "venue": "Proceedings of International Semantic Web Conference (ISWC)", "volume": "", "issue": "", "pages": "732--744", "other_ids": {}, "num": null, "urls": [], "raw_text": "Willem Robert van Hage, Sophia Katrenko, and Guus Schreiber. 2005. A Method to Combine Linguistic Ontology-Mapping Techniques. In Proceedings of Inter- national Semantic Web Conference (ISWC), pages 732 - 744, Galway, Ireland. Springer.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A Method for Learning Part-Whole Relations", "authors": [ { "first": "Willem", "middle": [], "last": "Robert Van Hage", "suffix": "" }, { "first": "Hap", "middle": [], "last": "Kolb", "suffix": "" }, { "first": "Guus", "middle": [], "last": "Schreiber", "suffix": "" } ], "year": 2006, "venue": "Proceedings of International Semantic Web Conference (ISWC)", "volume": "", "issue": "", "pages": "723--735", "other_ids": {}, "num": null, "urls": [], "raw_text": "Willem Robert van Hage, Hap Kolb, and Guus Schreiber. 2006. A Method for Learning Part-Whole Relations. 
In Proceedings of International Semantic Web Conference (ISWC), pages 723 -735, Athens, GA, USA. Springer.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The OAEI food task: an analysis of a thesaurus alignment task", "authors": [ { "first": "Willem", "middle": [], "last": "Robert Van Hage", "suffix": "" }, { "first": "Margherita", "middle": [], "last": "Sini", "suffix": "" }, { "first": "Lori", "middle": [], "last": "Finch", "suffix": "" }, { "first": "Hap", "middle": [], "last": "Kolb", "suffix": "" }, { "first": "Guus", "middle": [], "last": "Schreiber", "suffix": "" } ], "year": 2010, "venue": "Applied Ontology", "volume": "5", "issue": "1", "pages": "1--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Willem Robert van Hage, Margherita Sini, Lori Finch, Hap Kolb, and Guus Schreiber. 2010. The OAEI food task: an analysis of a thesaurus alignment task. Applied Ontology, 5(1):1 -28.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The Role of Predicates in Opinion Holder Extraction", "authors": [ { "first": "Michael", "middle": [], "last": "Wiegand", "suffix": "" }, { "first": "Dietrich", "middle": [], "last": "Klakow", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the RANLP Workshop on Information Extraction and Knowledge Acquisition (IEKA)", "volume": "", "issue": "", "pages": "13--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Wiegand and Dietrich Klakow. 2011. The Role of Predicates in Opinion Holder Extraction. In Proceedings of the RANLP Workshop on Information Extraction and Knowledge Acquisition (IEKA), pages 13-20, Hissar, Bul- garia.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Web-based Relation Extraction for the Food Domain", "authors": [ { "first": "Michael", "middle": [], "last": "Wiegand", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Dietrich", "middle": [], "last": "Klakow", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the International Conference on Applications of Natural Language Processing to Information Systems (NLDB)", "volume": "", "issue": "", "pages": "222--227", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Wiegand, Benjamin Roth, and Dietrich Klakow. 2012a. Web-based Relation Extraction for the Food Do- main. In Proceedings of the International Conference on Applications of Natural Language Processing to Infor- mation Systems (NLDB), pages 222-227, Groningen, the Netherlands. Springer.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A Gold Standard for Relation Extraction in the Food Domain", "authors": [ { "first": "Michael", "middle": [], "last": "Wiegand", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Lasarcyk", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "K\u00f6ser", "suffix": "" }, { "first": "Dietrich", "middle": [], "last": "Klakow", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Conference on Language Resources and Evaluation (LREC)", "volume": "", "issue": "", "pages": "507--514", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Wiegand, Benjamin Roth, Eva Lasarcyk, Stephanie K\u00f6ser, and Dietrich Klakow. 2012b. A Gold Standard for Relation Extraction in the Food Domain. 
In Proceedings of the Conference on Language Resources and Evaluation (LREC), pages 507-514, Istanbul, Turkey.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Recognizing Contextual Polarity in Phrase-level Sentiment Analysis", "authors": [ { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Hoffmann", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT/EMNLP)", "volume": "", "issue": "", "pages": "347--354", "other_ids": {}, "num": null, "urls": [], "raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing Contextual Polarity in Phrase-level Senti- ment Analysis. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT/EMNLP), pages 347- 354, Vancouver, BC, Canada.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Data Mining: Practical Machine Learning Tools and Techniques", "authors": [ { "first": "Ian", "middle": [], "last": "Witten", "suffix": "" }, { "first": "Eibe", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Witten and Eibe Frank. 2005. Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kauf- mann Publishers, San Francisco, US.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Exploring Various Knowledge in Relation Extraction", "authors": [ { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Su", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "427--434", "other_ids": {}, "num": null, "urls": [], "raw_text": "GuoDong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring Various Knowledge in Relation Extraction. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 427-434, Ann Arbor, MI, USA.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Potatoes are incredibly healthy, versatile in the kitchen and very tasty.", "num": null, "type_str": "figure" }, "TABREF0": { "html": null, "num": null, "content": "
Type                       | Abbrev. | Frequency | Percentage
Is-Healthy                 | HLTH    |       488 |      20.00
Is-Unhealthy               | UNHLTH  |       171 |       7.01
OTHER:                     |         |           |
  No Relation              | NOREL   |       788 |      32.30
  Restricted Relation      | RESTR   |       312 |      12.79
  Unspecified Intersection | INTERS  |       198 |       8.11
  Embedding                | EMB     |       157 |       6.43
  Comparison Relation      | COMP    |       121 |       4.96
  Unsupported Claim        | CLAIM   |        87 |       3.57
  Other Sense              | SENSE   |        77 |       3.16
  Irony                    | IRO     |        25 |       1.02
  Question                 | Q       |        16 |       0.66
", "type_str": "table", "text": "shows that less than 20% of the cooccurrences of the target food item and healthy express this relation. This may already indicate that its extraction is difficult." }, "TABREF1": { "html": null, "num": null, "content": "
: Statistics of the different (linguistic) phenomena.
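
The percentages in Table 1 can be recomputed directly from raw annotation counts. The following is a minimal sketch, not the authors' code; the function name and the flat list-of-labels layout are illustrative assumptions.

```python
from collections import Counter

def label_distribution(labels):
    """Return (label, frequency, percentage) triples for a list of labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return [(label, freq, 100.0 * freq / total)
            for label, freq in counts.most_common()]

if __name__ == "__main__":
    # Counts copied from Table 1; expanding them into a label list lets the
    # sketch reproduce the reported percentages (e.g. 488/2440 = 20.00 for HLTH).
    counts_from_table_1 = {
        "HLTH": 488, "UNHLTH": 171, "NOREL": 788, "RESTR": 312,
        "INTERS": 198, "EMB": 157, "COMP": 121, "CLAIM": 87,
        "SENSE": 77, "IRO": 25, "Q": 16,
    }
    labels = [lab for lab, n in counts_from_table_1.items() for _ in range(n)]
    for label, freq, pct in label_distribution(labels):
        print(f"{label:8s} {freq:5d} {pct:6.2f}")
```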
", "type_str": "table", "text": "" }, "TABREF4": { "html": null, "num": null, "content": "", "type_str": "table", "text": "Rule-based classifiers based on linguistic features(Table 2). ple, are negated because they are typical cues for RESTR. The remaining features are negated since they are either indicative of UNHTLTH, COMP, EMB, CLAIM, SENSE, IRO or Q. The classifier for UNHLTH only differs from HLTH in that either of the negation cues, i.e. negTarget or negHealth, has to be present." }, "TABREF5": { "html": null, "num": null, "content": "
Word-based Features
", "type_str": "table", "text": "lists the results for various feature sets that we experimented with. take-all is an unsupervised baseline that considers all instances of our dataset as positive instances (of the class which is examined, i.e. HLTH or UNHLTH). In other words, this baseline indicates how well the mere co-occurrence of healthy and the target food item predicts either of our two classes.3 Our second bag of words between the mention of healthy and target food item, and the additional words that precede or follow healthy and target Are target food item and healthy within the same clause? clause I like chocolatetarget, even though I consider fruits the healthy option for snacks. Feature operates on parse output. Is there a punctuation mark between target food item and healthy?boundary I know that vegetables are extremely healthy; but I prefer chocolatetarget.Token-level back-off feature to clause. Is there another food item between target food item and healthy?otherFood We always had healthy meals with lots of vegetables and salad, but this does not mean that we were not allowed to eat chocolatetarget. Token-level back-off feature to clause." }, "TABREF6": { "html": null, "num": null, "content": "
                          |  HLTH                 |  UNHLTH
Features                  |  Pre   Rec    F1      |  Pre   Rec    F1
take-all (baseline 1)     | 20.3  100.0  33.7     |  6.9  100.0  13.0
prior (baseline 2)        | 28.0   87.3  42.3     | 29.7   44.0  35.3
priorCont                 | 21.2   96.9  34.7     | 14.3   34.8  20.3
prior+priorCont           | 28.0   86.9  42.3     | 29.7   44.0  35.3
word                      | 35.9   66.5  46.6     | 39.7   42.5  41.0
linguistic                | 38.3   66.1  48.3     | 35.9   43.5  39.1
word+linguistic           | 40.2   63.6  49.1 *   | 40.9   47.1  43.4 *
word+prior                | 38.1   70.1  49.2 •   | 46.7   43.3  44.7
word+priorCont            | 35.0   65.3  45.5     | 40.0   42.9  41.0
word+prior+priorCont      | 37.4   70.8  48.8 •   | 46.8   42.8  44.4
word+linguistic+priorCont | 41.4   64.3  50.2     | 42.8   42.1  41.7
word+linguistic+prior     | 44.1   68.3  53.3 •†‡ | 44.8   60.5  51.1 •†‡
all features              | 44.5   69.3  53.9 •†‡ | 42.9   63.5  51.0 •†‡
rule-based                | 53.4   17.9  26.8     | 45.0   11.0  17.7
Significantly better than word: * at p < 0.1, • at p < 0.05; better than word+linguistic: † at p < 0.05; better than word+prior: ‡ at p < 0.05 (paired t-test).
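
A quick way to sanity-check the take-all rows is to recompute F1 from the reported precision and recall. The sketch below is purely illustrative and not part of the paper's evaluation code.

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall (values in percent)."""
    return 2 * precision * recall / (precision + recall)

# take-all predicts every instance as positive, so recall is 100 and
# precision equals the share of the class among all instances, i.e.
# roughly the class prior from Table 1 (20.00 for HLTH, 7.01 for UNHLTH).
print(round(f1(20.3, 100.0), 1))   # 33.7 -> matches the HLTH take-all row
print(round(f1(6.9, 100.0), 1))    # 12.9 -> close to the 13.0 reported for UNHLTH
```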
", "type_str": "table", "text": "Description of the feature set; the set contains several cue word lists, in order to avoid overfitting, we either translated existing resources from English or used diverse web-resources that are not related to our dataset." }, "TABREF7": { "html": null, "num": null, "content": "", "type_str": "table", "text": "Comparison of different feature sets." } } } }