{ "paper_id": "I13-1028", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:14:42.597350Z" }, "title": "Natural Language Query Refinement for Problem Resolution from Crowd-Sourced Semi-Structured Data", "authors": [ { "first": "Rashmi", "middle": [], "last": "Gangadharaiah", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM Research", "location": { "country": "India Research Lab" } }, "email": "rashgang@in.ibm.com" }, { "first": "Balakrishnan", "middle": [], "last": "Narayanaswamy", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM Research", "location": { "country": "India Research Lab" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We study the problem of natural language query generation for decision support systems (DSS) in the problem resolution domain. In this domain, a user has a task he is unable to accomplish (eg. bluetooth headphone not playing music), which we capture using language structures. We show how important units that define a problem can robustly and automatically be extracted from large noisy online forum data, with no labeled data or query logs. We also show how these units can be selected to reduce the number of interactions and how they can be used to generate natural language interactions for query refinement.", "pdf_parse": { "paper_id": "I13-1028", "_pdf_hash": "", "abstract": [ { "text": "We study the problem of natural language query generation for decision support systems (DSS) in the problem resolution domain. In this domain, a user has a task he is unable to accomplish (eg. bluetooth headphone not playing music), which we capture using language structures. We show how important units that define a problem can robustly and automatically be extracted from large noisy online forum data, with no labeled data or query logs. We also show how these units can be selected to reduce the number of interactions and how they can be used to generate natural language interactions for query refinement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Decision Support Systems (DSSs) that help decision makers extract useful knowledge from large amounts of data have found widespread application in areas ranging from clinical and medical diagnosis (Musen et al., 2006) to banking and credit verification (Palma-dos Reis et al., 1999) . IBM's Watson Deep Question Answering system (Ferrucci et al., 2010) , can be applied to DSSs to diagnose and recommend treatments for lung cancer and to help manage health insurance decisions and claims 1 . Motivated by the rapid explosion of contact centres, we focus on the application of DSSs to assist technical contact center agents.", "cite_spans": [ { "start": 197, "end": 217, "text": "(Musen et al., 2006)", "ref_id": "BIBREF24" }, { "start": 253, "end": 282, "text": "(Palma-dos Reis et al., 1999)", "ref_id": "BIBREF25" }, { "start": 329, "end": 352, "text": "(Ferrucci et al., 2010)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Contact center DSSs should be designed to assist an agent in the problem resolution domain. This domain is characterized by a user calling in to a contact center with the problem of being unable to perform an action with their product (e.g. I am unable to connect to youtube). Currently contact center DSSs are essentially search engines for technical manuals. 
However, this has two shortcomings : (i) in most cutting edge consumer technology, like software and smart devices, the range of possible applications and use cases makes it impossible to list all of them in the manuals-limiting their usefulness under the heavy tailed nature of customer problems, (ii) contact centers are known to suffer from high churn due to pressures and difficulties of the jobs, particularly the need for rapid resolution, making ease of use essential since users of these DSSs are somewhere between experts (in using the system) and novices (in the actual technology customers need help with).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "With the birth and growth of the Web 2.0 and in particular, large and active online product forums, such as, Yahoo! Answers 2 , Ubuntu Forums 3 and Apple Support Communities 4 , there is the hope that other technology savvy users will find and resolve large number of problems within days of the release of a product. However, these forums are noisy, i.e. they contain many throw-away comments and erroneous solutions. The first important question we address in this paper is, how can we mine relevant information from online forums and, in essence, crowdsource the creation of contact center DSSs? In particular, we show how many problems faced by consumers can be captured by actions on attributes (e.g. bluetooth headphone not playing music).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to address the second shortcoming, we study the problem of automatic interactive query refinement in DSSs. When DSSs are used by noncomputer scientists, natural language understanding and interaction problems take center-stage (Alter, 1977) . Since both customers and agents are not experts in a technical area, mis-understandings are common. As agents are evaluated based on the number of problems resolved, it is often the case that queries entered by an agent are underspecified. In response to such a query, a search engine may return a large number of documents. For complicated technical queries, the time taken by an agent to read the long list of returned information and possibly reformulate the query could be significant. The second question we address in this paper is how can a DSS make natural language suggestions that assist the agent in acquiring additional information from a caller to resolve her problem in the shortest amount of time? Finally, for rapid prototyping and deployment, we develop a system and architecture that does not use any form of labeled data or query logs, a big differentiator from prior work. Query Logs are not always available for enterprise systems that are not on the web and/or have a smaller user base (Bhatia et al., 2011) . When software and hardware change rapidly over time, it is infeasible to quickly collect large query logs. 
Also, logs may not always be accessible due to privacy and legal constraints.", "cite_spans": [ { "start": 236, "end": 249, "text": "(Alter, 1977)", "ref_id": "BIBREF0" }, { "start": 1260, "end": 1281, "text": "(Bhatia et al., 2011)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To the best of our knowledge, this paper presents the first interactive system (and detailed evaluation thereof) for natural language problem resolution in the absence of manually labeled logs or pre-determined dialog state sequences. Concretely, our primary contributions are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Problem Representation and Unit Extraction: We define and automatically extract units that best represent a problem. We show how to do this robustly from noisy forum threads, allowing us to use abundant online user generated content.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Unit selection for Interaction: We propose and evaluate a complete interactive system for users to quickly find a resolution based on semi-structured online forum data. Follow up questions are generated automatically from the retrieved results to minimize the number of interactions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Natural Language Question Generation: We demonstrate that, in a dialog system it is possible and useful to automatically generate fluent interactions based on the units we define using appropriate templates. We use these to create follow up questions to the user, which have much needed context, and show that this improves precision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In online forums, people facing issues with their product (thread initiators) post their problems and other users write subsequent posts discussing solutions. These threads form a rich data source that could contain problems similar to what a user who calls in to the contact center faces, and can be used to find an appropriate solution. Our system ( Figure 1 ) has two phases. In the offline phase, the system extracts units that describe the problem being discussed. In the online phase, the interaction engine selects the most appropriate units that best divide the space of search results obtained, to minimize the number of interactions. The system then generates follow up interactions by inserting the units into appropriate unit type dependent templates. The answers to these follow up questions are used to improve the search results. ", "cite_spans": [], "ref_spans": [ { "start": 352, "end": 360, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Proposed System", "sec_num": "2" }, { "text": "It is important to select representational units that capture the signature, or the most important characteristics of the information that users search for. This signature should be sufficient to find relevant results. In order to understand the right units for the problem resolution domain, we conducted the following user study. Five annotators analyzed the first posts from 50 threads from the Apple Discussion Forum, and were asked to mark the most relevant short segments of the post that best described the problem (an example in Table 1 ). 
Based on the user study, the first kind of units we considered were phrases, which are consecutive words that occur very frequently in the thread.", "cite_spans": [], "ref_spans": [ { "start": 537, "end": 544, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Representational Units", "sec_num": "2.1" }, { "text": "Phrases as query suggestions have been shown to improve user experience when compared to just showing terms (Kelly et al., 2009 ) since longer phrases tend to provide more context information. One shortcoming of these contiguous phrasal units is that they are sensitive to typography, i.e. small changes in phrasing (e.g. ios -4 and ios 4 ) lead to different phrases and the occurrence counts are divided among these variations. This causes difficulties both in the problem representation as well as in the search for problem resolution which are exacerbated by the noisy, casual syntax in forums. Motivated by Probst et al. (2007) , we extract attribute-value pairs. These units provide both robustness as well as more configurational context to the problem. Another observation from the segments marked was that many of them involved a user wanting to perform an action (I cannot hear notifications on bluetooth) or the problems caused by a user's action (working great before I updated). We capture them using actionattribute tuples (details in Section 3.1.1).", "cite_spans": [ { "start": 108, "end": 127, "text": "(Kelly et al., 2009", "ref_id": "BIBREF17" }, { "start": 611, "end": 631, "text": "Probst et al. (2007)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Representational Units", "sec_num": "2.1" }, { "text": "Thread initiators describe the same problem in different ways leading to multiple threads discussing the same problem. Ideally, we want the representation of the problem to be the same for all these threads to build robust statistics. Consider the following examples, sync server has failed, sync server failed, sync server had failed, sync server has been failing. While the phrasing is different, we see that their dependency parse trees ( Figure 2 ) show a common relation between the verb or action, fail, and the attribute sync server (the base form of the verbs are obtained from their corresponding lemmas with TreeTagger (Schmid, 1994) ). Motivated by this, we use dependency parse trees for extracting action-attribute tuples. ", "cite_spans": [ { "start": 629, "end": 643, "text": "(Schmid, 1994)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 442, "end": 450, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Representational Units", "sec_num": "2.1" }, { "text": "!\"#$%!&'(&'%)*!%+*,-&.%% !\"#$%!&'(&'%+*,-&.%% !\"#$%!&'(&'%)*.%+*,-&.%% !\"#$%!&'(&'%)*!%/&&#%+*,-,#0%% !\"#$%& !!& !\"#$%& '#(& !!& !\"#$%& !!& '#(& !\"#$%& !!& '#(& '#(&", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representational Units", "sec_num": "2.1" }, { "text": "We now give a detailed description, showing how the three types of units can be robustly extracted and used from noisy online forums.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detailed System Description", "sec_num": "3" }, { "text": "In the offline phase we first extract candidate units that describe a problem (and its solution) from the forum threads. 
We then filter this description, using the thread itself, to retain the important units.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Offline Component", "sec_num": "3.1" }, { "text": "Sentences are tagged and parsed using the Stanford dependency parser (de Marneffe et al., 2006) . The following units are then obtained from the first post of the discussion thread. Phrasal units: defined to be Noun Phrases satisfying the regular expression, (Adjective)* (Number/Noun/Proper Noun)+, (eg., interactive web sites, osx widgets, 2007 outlook calendar). These are extracted from the discussion thread along with their frequencies of occurrence. Attribute-Value pairs: The dependency relations amod (adjectival modifier of a noun phrase) and num (numeric modifier of a noun) in the parsed output are used for this purpose. In the case of amod, the attribute is the noun that the adjective modifies and its value is the adjective. For example, with amod(signal,strong), the attribute is signal and its value is strong. In the case of num, the attribute is the noun and its value is the numeric modifier. For example, with num(digits,10), the attribute is digits and its value is 10. As mentioned in Section 2.1, these pairs capture more context of the problem being discussed. Additional attributevalue pairs are extracted by expanding the attributes with the adjacent nouns and adjectives that occur with it. For the example in Figure 3 , the attribute signal is modified to cell phone signal and added to the list of attribute-value pairs along with their frequencies of occurrence. Action-Attribute tuples: The dependency relations used for these units are given in Table 3 with examples. Many of these units help describe the user's problem while others provide contextual information behind the problem being discussed. These units are described with 4-tuples (Arg 1verb-Arg 2 -Arg 3 ), three of which are the arguments of the verb or attributes of an action. The relations given in the first column of Table 3 form fillers for the attributes of the action. The example in Figure 3 gives the tuple, I-entered-dns-null, where, I is the subject, entered is the action performed, dns is the object. If the verb has a prt relation, the particle is appended to the verb. For example, turned has a prt relation in prt(turned,off), hence, the verb is now modified to turned off.", "cite_spans": [ { "start": 69, "end": 95, "text": "(de Marneffe et al., 2006)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 1239, "end": 1247, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 1479, "end": 1486, "text": "Table 3", "ref_id": null }, { "start": 1818, "end": 1825, "text": "Table 3", "ref_id": null }, { "start": 1888, "end": 1896, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Candidates for Problem Description", "sec_num": "3.1.1" }, { "text": "Since entered in this example takes only a subject and an object as arguments, the third argument is null. Consider another example, I removed the wep password in the router settings, the tuple is now Iremoved-password-in the settings. The last row in Table 3 gives an example of the usage of the xcomp relation. 
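To make the extraction above concrete, the following is a minimal sketch (not the system's actual implementation) of how attribute-value pairs and action-attribute tuples can be read off typed dependency triples. The (relation, head, dependent) triple format, the slot mapping, and the hand-written example are illustrative assumptions; in the system the relations come from the Stanford dependency parser and verbs are lemmatized with TreeTagger.

```python
# Minimal sketch of turning typed-dependency triples into the units of Section 3.1.1.
# Triples, slot names and the example sentence are illustrative assumptions only.

def extract_units(deps):
    """deps: list of (relation, head, dependent) triples for one sentence."""
    attr_val, act_att = [], []

    # Attribute-value pairs from adjectival (amod) and numeric (num) modifiers.
    for rel, head, dep in deps:
        if rel in ("amod", "num"):
            attr_val.append((head, dep))               # e.g. (signal, strong), (digits, 10)

    # Action-attribute 4-tuples (Arg1-verb-Arg2-Arg3) from the verb's arguments.
    args = {}
    for rel, head, dep in deps:
        if rel in ("nsubj", "dobj", "iobj") or rel.startswith("prep"):
            slot = {"nsubj": "arg1", "dobj": "arg2"}.get(rel, "arg3")
            args.setdefault(head, {}).setdefault(slot, dep)
        elif rel == "prt":                             # particle verbs, e.g. turned + off
            args.setdefault(head, {})["verb"] = head + " " + dep

    for verb, a in args.items():
        act_att.append((a.get("arg1"), a.get("verb", verb), a.get("arg2"), a.get("arg3")))
    return attr_val, act_att

# "I entered the dns because I do not have a strong cell phone signal."
deps = [("nsubj", "entered", "I"), ("dobj", "entered", "dns"), ("amod", "signal", "strong")]
print(extract_units(deps))   # ([('signal', 'strong')], [('I', 'entered', 'dns', None)])
```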
As done with Attribute-Value pairs, the attributes in these units are also expanded with the adjacent nouns and adjectives and added to the list of Action-Attribute tuples along with their frequencies of occurrence in the entire thread.", "cite_spans": [], "ref_spans": [ { "start": 252, "end": 259, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Candidates for Problem Description", "sec_num": "3.1.1" }, { "text": "Since the problem is defined by the thread initiator in the first post of the thread, units in the first post are scored and ranked based on tf-idf (Manning et al., 2008) . We treat each thread as a document and the top 50 highest scoring candidates form the problem description for the thread. Units are extracted from the rest of the thread in order to obtain frequency statistics for the units in the first post. Pronouns, prepositions and determiners are dropped from the units while obtaining the counts. In addition, verbs in the action-attribute tuples are converted to their base form using the lemma information obtained from TreeTagger (Schmid, 1994) , to obtain counts. This makes the scores robust to small variations in the units. Examples of extracted units are shown in Table 2. We see that errors in the parse (volumes was tagged as a verb) cause erroneous units (*I volumes up). For this reason, we use frequency statistics from the rest of the discussion thread, to determine if a unit is valid or not. The tfidf based scheme also removes commonly used phrases such as, please help me, thank you, etc.", "cite_spans": [ { "start": 148, "end": 170, "text": "(Manning et al., 2008)", "ref_id": "BIBREF21" }, { "start": 646, "end": 660, "text": "(Schmid, 1994)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Scoring and Filtering", "sec_num": "3.1.2" }, { "text": "The system searches for a set of initial documents based on the user's initial query. Next, the followup candidate units are selected (Section 3.2.1) from the units extracted in the offline phase for the !\"\"#$%#&#'\"%(#\"'$)\"*#+,-)#\"!\"'.\"$.%\"(,/#\",\")%&.$0\"+#11\"2(.$#\")30$,1\" retrieved documents and natural language interactions are further generated by filling the templates (Section 3.2.2) with the selected units.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Online Phase", "sec_num": "3.2" }, { "text": "!\"#$%& '($%& ')*& +',-.& /+01& !\"#$%& +#2& !)3& ')*& +/('& !!& !!& '($%&", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Online Phase", "sec_num": "3.2" }, { "text": "Interactions should be selected to (i) understand the user's requirements better by making the query more specific and reduce the number of results returned by the search engine, and (ii) reduce the number of interactions required. We use information gain to find the best unit that reduces the search space, motivated by its near optimality (Golovin et al., 2010) . 
If S is the set of all retrieved documents, S 1 \u2286 S containing unit i and S 2 is a subset of S that do not contain unit i , the gain from branching (or interacting) on unit i is,", "cite_spans": [ { "start": 342, "end": 364, "text": "(Golovin et al., 2010)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Selection of candidate units for Question Generation", "sec_num": "3.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Gain(S, uniti) = E(S) \u2212 |S1| |S| E(S1) \u2212 |S2| |S| E(S2) (1) E(S) = k=1,...|S| \u2212p(doc k )log2p(doc k )", "eq_num": "(2)" } ], "section": "Selection of candidate units for Question Generation", "sec_num": "3.2.1" }, { "text": "Where, each document is assigned a probability based on its rank in the search results:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selection of candidate units for Question Generation", "sec_num": "3.2.1" }, { "text": "p(docj ) = 1 rank(doc j ) k=1,...|S| 1 rank(doc k ) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selection of candidate units for Question Generation", "sec_num": "3.2.1" }, { "text": "The unit that gives the highest information gain forms the candidate for question generation. Information gain has been widely used in Decision Trees (Mitchell, 1997) , where the nodes represent attributes and the edges indicate values, and is known to result in short trees. In our case, the nodes represent the follow up questions and the edges indicate whether the user's answer is yes or", "cite_spans": [ { "start": 150, "end": 166, "text": "(Mitchell, 1997)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Selection of candidate units for Question Generation", "sec_num": "3.2.1" }, { "text": "Attributes(Arg1/Arg2/Arg3) Action(Verb) nsubj nsubj(entered,I) I (Arg1) entered dobj dobj(entered,dns) dns (Arg2) entered iobj iobj(give, address) address (Arg3) address pobj prep(connect,to), pobj(to,wifi) wifi(Arg2) connect prep (to,into, etc) prep in(removed,settings) in the settings (Arg3 if Arg2 not present, else Arg2) removed xcomp xcomp(prompt, connect), prep to(connect,wifi), password(Arg1), to wifi (Arg2) prompt to connect nsubj (prompt, password) Table 3 : Dependency relations used to extract Action-Attribute tuples no. The goal in decision trees is to quickly exhaust the space of examples with fewer steps, resulting in shorter trees. The goal in this paper is to traverse the space of results obtained with the initial query to reach the most relevant document with the fewest interactions. Since the dialog problem can be easily mapped to decision trees, the choice of information gain allows the user to arrive at the most relevant document with the smallest number of interactions in the online phase.", "cite_spans": [ { "start": 231, "end": 245, "text": "(to,into, etc)", "ref_id": null } ], "ref_spans": [ { "start": 288, "end": 341, "text": "(Arg3 if Arg2 not present, else Arg2) removed xcomp", "ref_id": "TABREF4" }, { "start": 463, "end": 470, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Example", "sec_num": null }, { "text": "Questions are generated based on the type, number and tense information present in the units. The list of templates used for question generation is given in Table 4 . Once a candidate unit is selected, a template is chosen based on its type. Phrasal units have a single template. 
If an attribute has two values with very similar information gains, the template for Attribute-Value pairs accommodates the different values. For example, if the pairs, Outlook:2003 and Outlook:2007 have very similar gains, the question would then be Is your outlook: Option 1 :2003 Option 2 :2007 ? and the user has the option to click on the one that is relevant to his query. For Action-Attribute tuples, the templates are chosen based on the the person, number and tense information from the verbs (Table 4) . null in the table (for example, null-send-emails-null) indicates that a particular argument does not exist or was not found and hence the argument will not be added to the appropriate template. Certain templates require converting the verb to a different form (e.g., VBD to VBN). This mapping is stored as a dictionary obtained by running the TreeTagger on the entire dataset and various forms are automatically obtained by linking them to the lemmas of the verbs (for example, give(VB/lemma) gave(VBD) given(VBN) gives(VBZ)).", "cite_spans": [], "ref_spans": [ { "start": 157, "end": 164, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 782, "end": 791, "text": "(Table 4)", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Question Generation", "sec_num": "3.2.2" }, { "text": "To evaluate our system, we built and simulated a contact center DSS for iPhone problem resolution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "We crawled threads created during the period 2007-2011 from the Apple Discussion Forum resulting in about 147, 000 discussion threads. In order to create a test data set, threads were clustered treating each discussion thread as a data point using a tf-idf representation. The thread nearest the centroid from the 60 largest clusters were marked as the 'most common' problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Description of Dataset", "sec_num": "4.1" }, { "text": "Underspecified Query 1: \"cannot sync calendar\" Forms 6 specific queries 1. because iphone disconnected 2. because the sync server failed to sync 3. because the sync session could not be started 4. because the phone freezes 5. error occurred while mingling data 6. error occurred while merging data To generate specific and under-specified queries on this data set, in our experiments, we use the first post as a proxy for the problem description. An annotator created one short query (underspecified) from the first post of each of the 60 selected threads. These queries were given to the Lemur search engine (Strohman et al., 2005) to retrieve the 50 most similar threads from an index built on the entire set of 147, 000 threads. He manually analyzed the first posts of the retrieved threads to create contexts, resulting in 217 specific queries. To understand this process we give an example from our data creation in Table 5 . Starting from an under-specified query cannot sync calendar, the annotator found 6 specific queries. Two other annotators, were given these specific queries along with the search engine's results from the corresponding under-specified query. They were asked to choose the most relevant results for the specific queries. 
The intersection of the choices of the annotators formed our 'gold standard'.", "cite_spans": [ { "start": 609, "end": 632, "text": "(Strohman et al., 2005)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 921, "end": 928, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Description of Dataset", "sec_num": "4.1" }, { "text": "We conducted two user studies to determine the (subjective) value of our problem representation, focusing on action-attribute tuples. In the first user study, 5 users were given the first post of 20 threads with three problem representations for the first post, (1) phrasal units only, (2) phrasal units and attribute-value (Att-Val) pairs and (3) phrasal units, attribute-value pairs and actionattribute (Act-Att) tuples. They were asked to indicate which representation best represented the problem. All users preferred the third representation on all the first posts. An example first post and units are in Table 2. In the second study, the same 5 users were asked to indicate how many units in Representation 3 were not relevant to the problem discussed in the first post, for a subset of 10 threads. We defined 'not relevant' as noisy components which do not aid in the problem representation e.g. oh boy and thanks! (see Table 2 ). All users marked 2 examples (sort and way) as not relevant, out of 110 units that the algorithm generated for these threads.", "cite_spans": [], "ref_spans": [ { "start": 610, "end": 618, "text": "Table 2.", "ref_id": "TABREF4" }, { "start": 927, "end": 934, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "User based analysis of the problem representation and unit extraction", "sec_num": "4.2" }, { "text": "These two user studies taken together show that the combined set of units, is able to capture the problem description well and that our algorithm is able to filter out noise in the thread to create a robust and useful representation of the problem. The results in Section 4.3 (Tables 6 and 7) , show the value of our problem representation, in a complete end-to-end system, with objective metrics.", "cite_spans": [], "ref_spans": [ { "start": 276, "end": 292, "text": "(Tables 6 and 7)", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "User based analysis of the problem representation and unit extraction", "sec_num": "4.2" }, { "text": "We evaluate a complete system with both user (or agent) and search engine in the loop. We focus on measuring the value of the interactions by an analysis of which results 'rise to the top'. The experiment was conducted as follows. Annotators were given a specific query and its underspecified query (as created in Section 4.1) along with the results obtained when the underspecified query was input to the search engine. They were presented with the T op = 1, 3 or 5 scoring follow up questions. E.g., for the underspecified query in Table 5 and specific query 2, the generated question was (see Table 4 ), Has the sync server failed to sync?. The user then selected the most appropriate follow up question, reducing the number of results. We then measured the relevance of the reduced result, with respect to the gold standard (see Section 4.1) for that specific query, using metrics commonly used in Information Retrieval -MRR, Mean Average Precision (MAP) and Success at rank N (Baeza- Yates et al., 1999) . 
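For concreteness, the sketch below shows how these scores can be computed for a single query, given a ranked list of thread ids and the gold-standard set from Section 4.1; MRR and MAP then average the per-query values over all queries. The exact AP@N convention, the helper names, and the data values are illustrative assumptions.

```python
# Per-query evaluation sketch against a gold-standard set; values are illustrative only.

def reciprocal_rank(ranked, gold):
    for i, doc in enumerate(ranked, start=1):
        if doc in gold:
            return 1.0 / i
    return 0.0

def average_precision_at(ranked, gold, n):
    hits, precisions = 0, []
    for i, doc in enumerate(ranked[:n], start=1):
        if doc in gold:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / min(len(gold), n) if gold else 0.0

def success_at(ranked, gold, n):
    return 1.0 if any(doc in gold for doc in ranked[:n]) else 0.0

ranked = ["t7", "t2", "t9", "t1", "t5"]
gold = {"t2", "t1"}
print(reciprocal_rank(ranked, gold),
      average_precision_at(ranked, gold, 5),
      success_at(ranked, gold, 5))   # 0.5 0.5 1.0
```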
We restrict N = 5 (small) since the rapid resolution time required of contact center agents does not allow them to look at many results.", "cite_spans": [ { "start": 989, "end": 1008, "text": "Yates et al., 1999)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 534, "end": 541, "text": "Table 5", "ref_id": "TABREF5" }, { "start": 596, "end": 603, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Unit selection for interaction", "sec_num": "4.3" }, { "text": "In Figures 4, 5 and Table 6 , we compare our system against a baseline system, which is the set of results obtained with the underspecified query, and a system where 5 interaction units are selected at random from the initial search results. Note that, as the number of follow up questions presented increases, the scores will improve since it is more likely that the 'correct' choice is presented. However, there is a trade-off here since the agent has to again peruse more questions, which increases time spent, and so we limit this value to 5 as well. to give a substantial improvement in performance. E.g., one intelligently chosen interaction performs better than 5 randomly chosen ones. These results show the value of the units we select and the choice of information gain as a metric. To measure the importance of each unit type, we analyzed the selected follow up questions (T op = 5) for each underspecified query. Table 7 lists the fraction of queries whose origin was a specific unit type.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 15, "text": "Figures 4, 5", "ref_id": "FIGREF3" }, { "start": 20, "end": 27, "text": "Table 6", "ref_id": "TABREF9" }, { "start": 925, "end": 932, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Unit selection for interaction", "sec_num": "4.3" }, { "text": "Preference Phrases 51% Attribute-Value Pairs 12% Action-Attribute Tuples 37% Table 7 : Fraction of follow up questions selected that originated from a specific unit type", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 84, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Unit's Type", "sec_num": null }, { "text": "Finally, an annotator was given 100 generated follow up questions from the previous experiment and asked to label them as understandable or not. The annotator marked 13% as not understandable. Examples were, does the phone connect, has the touchscreen stopped, does the message connect (which were due to errors in parsing) and do you want to leave the car (due to a filtering error).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating Templates for Question Generation", "sec_num": "4.4" }, { "text": "Our work is related to three somewhat distinct areas of research, dialog systems, question answering (QA) systems and interactive search. Unlike most QA systems, we continue a sequence of interactions where the system and the user are active participants. The primary contribution of this work is a combined DSS, search, natural language dialog and query refinement system built automatically from semi-structured forum data. No prior work on interactive systems deals with problem resolution from large scale, noisy online forums.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Many speech dialog systems exist today for tasks including, obtaining train information (RAILTEL) (Bennacef et al., 1996) , airline information (Rudnicky et al., 2000) and weather information (Zue et al., 2000) . 
These systems perform simple database retrieval tasks, where, the keywords and their possible values are known apriori.", "cite_spans": [ { "start": 98, "end": 121, "text": "(Bennacef et al., 1996)", "ref_id": "BIBREF3" }, { "start": 144, "end": 167, "text": "(Rudnicky et al., 2000)", "ref_id": "BIBREF27" }, { "start": 192, "end": 210, "text": "(Zue et al., 2000)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "In general document retrieval tasks, when a user's query is under-specified and a large number of documents are retrieved, interactive search engines have been designed to assist the user in narrowing down the search results (Bruza et al., 2000) . Much research has concentrated on query reformulation or query suggestions tasks. Suggestions are often limited to terms or phrases either extracted from query logs (Guo et al., 2008; Baeza-Yates et al., 2004) or from the documents obtained in the initial search results (Kelly et al., 2009) . Bhogal et al. (2007) require rich ontologies for query expansion, which may be difficult and expensive to obtain for new domains. Leung et al. (2008) identify related queries from the web snippets of search results. Cucerzan and White (2007) use users' post-query navigation patterns along with the query logs to provide query suggestions. Mei et al. (2008) rank query suggestions using the click-through (query-url) graph. Boldi et al. (2009) provide query suggestions based on short random walk on the query flow graph. The main drawback behind these approaches is the dependence on query logs and labeled data to train query selection classifiers. We show how certain units are robust representations of documents in the problem resolution domain which can automatically be extracted from semi-structured data. Feuer et al. (2007) use a proximity search-based system that suggests sub and super phrases. Cutting et al. (1993; Hearst and Pedersen (1996; Kelly et al. (2009) cluster retrieved documents and make suggestions based on the centroids of the clusters. Kraft and Zien (2004) and Bhatia et al. (2011) use n-grams extracted from the text corpus to suggest query refinement. Although these techniques do not rely on query logs for providing suggestions, the suggestions are limited to contiguous phrases. They also do not generate follow up questions, but instead provide a list of suggestions and require the user to select one among them or use them manually to reformulate the initial queries.", "cite_spans": [ { "start": 225, "end": 245, "text": "(Bruza et al., 2000)", "ref_id": "BIBREF7" }, { "start": 413, "end": 431, "text": "(Guo et al., 2008;", "ref_id": "BIBREF15" }, { "start": 432, "end": 457, "text": "Baeza-Yates et al., 2004)", "ref_id": "BIBREF2" }, { "start": 519, "end": 539, "text": "(Kelly et al., 2009)", "ref_id": "BIBREF17" }, { "start": 542, "end": 562, "text": "Bhogal et al. (2007)", "ref_id": "BIBREF5" }, { "start": 672, "end": 691, "text": "Leung et al. (2008)", "ref_id": "BIBREF20" }, { "start": 758, "end": 783, "text": "Cucerzan and White (2007)", "ref_id": "BIBREF8" }, { "start": 882, "end": 899, "text": "Mei et al. (2008)", "ref_id": "BIBREF22" }, { "start": 966, "end": 985, "text": "Boldi et al. (2009)", "ref_id": "BIBREF6" }, { "start": 1356, "end": 1375, "text": "Feuer et al. (2007)", "ref_id": "BIBREF13" }, { "start": 1449, "end": 1470, "text": "Cutting et al. 
(1993;", "ref_id": "BIBREF9" }, { "start": 1471, "end": 1497, "text": "Hearst and Pedersen (1996;", "ref_id": "BIBREF16" }, { "start": 1498, "end": 1517, "text": "Kelly et al. (2009)", "ref_id": "BIBREF17" }, { "start": 1607, "end": 1628, "text": "Kraft and Zien (2004)", "ref_id": "BIBREF19" }, { "start": 1633, "end": 1653, "text": "Bhatia et al. (2011)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Automatically framing natural language questions as follow up questions to the user is still a challenging task since, (1) Diriye et al. (2009) and Kelly et al. (2009) showed that interactive query expansion terms are poorly used, and tend to lack information meaningful to the user, thus emphasizing the need for larger context to best capture the actual query/problem intent (2) finding a few question/suggestions that would narrow the search results will lead to fewer interactions as opposed to displaying the single best result (3) particularly for non-technical users, interactions and clarifications need to be fluent enough for the user to understand and continue his interaction with the system (Alter, 1977) . In this paper, we show how to extract important representative contextual units (which do not necessarily contain contiguous words) and use these to generate contextual interactions. Sajjad et al. (2012) consider a data set where objects belong to a known category, with textual descriptions of objects and categories collected from human labelers, using which n-gram based attributes of objects are defined. Subsets of these attributes are filtered, again using labeled data. Kotov and Zhai (2010) frame questions with the help of handmade templates for the problem of factoid search from a subset of Wikipedia. However, they do not select queries with the goal of minimizing the number of interactions. To extend these approaches to problem-resolution finding, (as opposed to factoids or item descriptions) simple most common noun phrases (as used in Sajjad et al. (2012) and Kotov and Zhai (2010) ) are insufficient, since they do not capture the problem or intent of the user. As motivated in Section 1, this requires a better representation of candidate phrases. Our paper also suggests an approach that does not need any human labelled or annotated data. Suggestions are selected using units such that the problem intent is well captured and also ensure that fewer interactions take place between the user and the system. Follow-up questions are framed using templates designed for these units, allowing us to move beyond simple terms and phrases.", "cite_spans": [ { "start": 123, "end": 143, "text": "Diriye et al. (2009)", "ref_id": "BIBREF11" }, { "start": 148, "end": 167, "text": "Kelly et al. (2009)", "ref_id": "BIBREF17" }, { "start": 704, "end": 717, "text": "(Alter, 1977)", "ref_id": "BIBREF0" }, { "start": 903, "end": 923, "text": "Sajjad et al. (2012)", "ref_id": "BIBREF28" }, { "start": 1197, "end": 1218, "text": "Kotov and Zhai (2010)", "ref_id": "BIBREF18" }, { "start": 1573, "end": 1593, "text": "Sajjad et al. (2012)", "ref_id": "BIBREF28" }, { "start": 1598, "end": 1619, "text": "Kotov and Zhai (2010)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "This paper proposed an interactive system for natural language problem resolution in the absence of manually labelled logs or pre-determined dialog state sequences. 
As future work, we would like to use additional information such as, the trustworthiness of the posters, quality of solutions in the threads, etc., while scoring the documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "http://www.ihealthbeat.org/articles/2013/2/11/ibmoffering-two-new-decision-support-tools-based-onwatson.aspx", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://answers.yahoo.com/ 3 http://ubuntuforums.org/ 4 https://discussions.apple.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Why is man-computer interaction important for decision support systems?", "authors": [ { "first": "Steven", "middle": [], "last": "Alter", "suffix": "" } ], "year": 1977, "venue": "Interfaces", "volume": "7", "issue": "2", "pages": "109--115", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Alter. 1977. Why is man-computer interaction important for decision support systems? Interfaces, 7(2):109-115.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Modern information retrieval", "authors": [ { "first": "Ricardo", "middle": [], "last": "Baeza-Yates", "suffix": "" }, { "first": "Berthier", "middle": [], "last": "Ribeiro-Neto", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ricardo Baeza-Yates, Berthier Ribeiro-Neto, et al. 1999. Modern information retrieval. ACM press.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Query recommendation using query logs in search engines", "authors": [ { "first": "Ricardo", "middle": [], "last": "Baeza-Yates", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Hurtado", "suffix": "" }, { "first": "Marcelo", "middle": [], "last": "Mendoza", "suffix": "" } ], "year": 2004, "venue": "EDBT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ricardo Baeza-Yates, Carlos Hurtado, and Marcelo Mendoza. 2004. Query recommendation using query logs in search engines. In EDBT.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Dialog in the RAILTEL telephone-based system", "authors": [ { "first": "S", "middle": [], "last": "Bennacef", "suffix": "" }, { "first": "L", "middle": [], "last": "Devillers", "suffix": "" }, { "first": "S", "middle": [], "last": "Rosset", "suffix": "" }, { "first": "L", "middle": [], "last": "Lamel", "suffix": "" } ], "year": 1996, "venue": "ICSLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Bennacef, L. Devillers, S. Rosset, and L. Lamel. 1996. Dialog in the RAILTEL telephone-based sys- tem. In ICSLP.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Query suggestions in the absence of query logs", "authors": [ { "first": "Sumit", "middle": [], "last": "Bhatia", "suffix": "" }, { "first": "Debapriyo", "middle": [], "last": "Majumdar", "suffix": "" }, { "first": "Prasenjit", "middle": [], "last": "Mitra", "suffix": "" } ], "year": 2011, "venue": "SIGIR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sumit Bhatia, Debapriyo Majumdar, and Prasenjit Mi- tra. 2011. Query suggestions in the absence of query logs. 
In SIGIR.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A review of ontology based query expansion. Inf. Process", "authors": [ { "first": "J", "middle": [], "last": "Bhogal", "suffix": "" }, { "first": "A", "middle": [], "last": "Macfarlane", "suffix": "" }, { "first": "P", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Bhogal, A. Macfarlane, and P. Smith. 2007. A re- view of ontology based query expansion. Inf. Pro- cess. Manage.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Query suggestions using query-flow graphs", "authors": [ { "first": "Paolo", "middle": [], "last": "Boldi", "suffix": "" }, { "first": "Francesco", "middle": [], "last": "Bonchi", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Castillo", "suffix": "" }, { "first": "Debora", "middle": [], "last": "Donato", "suffix": "" }, { "first": "Sebastiano", "middle": [], "last": "Vigna", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paolo Boldi, Francesco Bonchi, Carlos Castillo, Deb- ora Donato, and Sebastiano Vigna. 2009. Query suggestions using query-flow graphs. WSCD.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Interactive internet search: keyword, directory and query reformulation mechanisms compared", "authors": [ { "first": "Peter", "middle": [], "last": "Bruza", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Mcarthur", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Dennis", "suffix": "" } ], "year": 2000, "venue": "SIGIR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Bruza, Robert McArthur, and Simon Dennis. 2000. Interactive internet search: keyword, di- rectory and query reformulation mechanisms com- pared. In SIGIR.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Query suggestion based on user landing pages", "authors": [ { "first": "Silviu", "middle": [], "last": "Cucerzan", "suffix": "" }, { "first": "Ryen", "middle": [ "W" ], "last": "White", "suffix": "" } ], "year": 2007, "venue": "SIGIR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Silviu Cucerzan and Ryen W. White. 2007. Query suggestion based on user landing pages. In SIGIR.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Constant interaction-time scatter/gather browsing of very large document collections", "authors": [ { "first": "Douglass", "middle": [ "R" ], "last": "Cutting", "suffix": "" }, { "first": "David", "middle": [ "R" ], "last": "Karger", "suffix": "" }, { "first": "Jan", "middle": [ "O" ], "last": "Pedersen", "suffix": "" } ], "year": 1993, "venue": "SIGIR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglass R. Cutting, David R. Karger, and Jan O. Pedersen. 1993. Constant interaction-time scat- ter/gather browsing of very large document collec- tions. 
In SIGIR.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Generating typed dependency parses from phrase structure parses", "authors": [ { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Maccartney", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. LREC.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A polyrepresentational approach to interactive query expansion", "authors": [ { "first": "Abdigani", "middle": [], "last": "Diriye", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Blandford", "suffix": "" }, { "first": "Anastasios", "middle": [], "last": "Tombros", "suffix": "" } ], "year": 2009, "venue": "ACM/IEEE-CS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abdigani Diriye, Ann Blandford, and Anastasios Tombros. 2009. A polyrepresentational approach to interactive query expansion. In ACM/IEEE-CS.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Building Watson: An overview of the DeepQA project", "authors": [ { "first": "David", "middle": [], "last": "Ferrucci", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Chu-Carroll", "suffix": "" }, { "first": "James", "middle": [], "last": "Fan", "suffix": "" }, { "first": "David", "middle": [], "last": "Gondek", "suffix": "" }, { "first": "Aditya", "middle": [ "A" ], "last": "Kalyanpur", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lally", "suffix": "" }, { "first": "William", "middle": [], "last": "Murdock", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Nyberg", "suffix": "" }, { "first": "John", "middle": [], "last": "Prager", "suffix": "" } ], "year": 2010, "venue": "AI Magazine", "volume": "", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A Kalyanpur, Adam Lally, J William Murdock, Eric Nyberg, John Prager, et al. 2010. Building Watson: An overview of the DeepQA project. AI Magazine, 31(3).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Evaluation of phrasal query suggestions", "authors": [ { "first": "Alan", "middle": [], "last": "Feuer", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Savev", "suffix": "" }, { "first": "Javed A", "middle": [], "last": "Aslam", "suffix": "" } ], "year": 2007, "venue": "CIKM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Feuer, Stefan Savev, and Javed A Aslam. 2007. Evaluation of phrasal query suggestions. In CIKM.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Near-optimal bayesian active learning with noisy observations", "authors": [ { "first": "Daniel", "middle": [], "last": "Golovin", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Krause", "suffix": "" }, { "first": "Debajyoti", "middle": [], "last": "Ray", "suffix": "" } ], "year": 2010, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Golovin, Andreas Krause, and Debajyoti Ray. 
2010. Near-optimal bayesian active learning with noisy observations. In NIPS.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A unified and discriminative model for query refinement", "authors": [ { "first": "Jiafeng", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Gu", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xueqi", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2008, "venue": "SIGIR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiafeng Guo, Gu Xu, Hang Li, and Xueqi Cheng. 2008. A unified and discriminative model for query refine- ment. In SIGIR.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Reexamining the cluster hypothesis: scatter/gather on retrieval results", "authors": [ { "first": "A", "middle": [], "last": "Marti", "suffix": "" }, { "first": "Jan", "middle": [ "O" ], "last": "Hearst", "suffix": "" }, { "first": "", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 1996, "venue": "SIGIR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marti A. Hearst and Jan O. Pedersen. 1996. Reex- amining the cluster hypothesis: scatter/gather on re- trieval results. In SIGIR.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A comparison of query and term suggestion features for interactive searching", "authors": [ { "first": "Diane", "middle": [], "last": "Kelly", "suffix": "" }, { "first": "Karl", "middle": [], "last": "Gyllstrom", "suffix": "" }, { "first": "Earl", "middle": [ "W" ], "last": "Bailey", "suffix": "" } ], "year": 2009, "venue": "SIGIR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diane Kelly, Karl Gyllstrom, and Earl W. Bailey. 2009. A comparison of query and term suggestion features for interactive searching. In SIGIR.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Towards natural question guided search", "authors": [ { "first": "Alexander", "middle": [], "last": "Kotov", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Zhai", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Kotov and ChengXiang Zhai. 2010. To- wards natural question guided search. In WWW.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Mining anchor text for query refinement", "authors": [ { "first": "Reiner", "middle": [], "last": "Kraft", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Zien", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reiner Kraft and Jason Zien. 2004. Mining anchor text for query refinement. In WWW.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Personalized concept-based clustering of search engine queries", "authors": [ { "first": "Kenneth Wai-Ting", "middle": [], "last": "Leung", "suffix": "" }, { "first": "Wilfred", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Dik Lun", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2008, "venue": "IEEE Trans. on Knowl. and Data Eng", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Wai-Ting Leung, Wilfred Ng, and Dik Lun Lee. 2008. Personalized concept-based clustering of search engine queries. IEEE Trans. on Knowl. 
and Data Eng.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Introduction to information retrieval", "authors": [ { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Prabhakar", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Raghavan", "suffix": "" }, { "first": "", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D Manning, Prabhakar Raghavan, and Hinrich Sch\u00fctze. 2008. Introduction to information retrieval. Cambridge University Press.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Query suggestion using hitting time", "authors": [ { "first": "Qiaozhu", "middle": [], "last": "Mei", "suffix": "" }, { "first": "Dengyong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Church", "suffix": "" } ], "year": 2008, "venue": "CIKM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qiaozhu Mei, Dengyong Zhou, and Kenneth Church. 2008. Query suggestion using hitting time. In CIKM.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Machine learning", "authors": [ { "first": "M", "middle": [], "last": "Tom", "suffix": "" }, { "first": "", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom M Mitchell. 1997. Machine learning. Burr Ridge, IL: McGraw Hill.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Clinical decision-support systems", "authors": [ { "first": "Yuval", "middle": [], "last": "Mark A Musen", "suffix": "" }, { "first": "Edward", "middle": [ "H" ], "last": "Shahar", "suffix": "" }, { "first": "", "middle": [], "last": "Shortliffe", "suffix": "" } ], "year": 2006, "venue": "Biomedical informatics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark A Musen, Yuval Shahar, and Edward H Short- liffe. 2006. Clinical decision-support systems. Biomedical informatics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Designing personalized intelligent financial decision support systems", "authors": [ { "first": "Ant\u00f3nio", "middle": [], "last": "Palma-Dos Reis", "suffix": "" }, { "first": "Fatemeh", "middle": [], "last": "Zahedi", "suffix": "" } ], "year": 1999, "venue": "Decision Support Systems", "volume": "26", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ant\u00f3nio Palma-dos Reis, Fatemeh Zahedi, et al. 1999. Designing personalized intelligent financial decision support systems. Decision Support Systems, 26(1).", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Extracting and using attribute-value pairs from product descriptions on the web. 
From Web to Social Web: Discovering and Deploying User and Content Profiles", "authors": [ { "first": "Katharina", "middle": [], "last": "Probst", "suffix": "" }, { "first": "Rayid", "middle": [], "last": "Ghani", "suffix": "" }, { "first": "Marko", "middle": [], "last": "Krema", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Fano", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katharina Probst, Rayid Ghani, Marko Krema, Andy Fano, and Yan Liu. 2007. Extracting and using attribute-value pairs from product descriptions on the web. From Web to Social Web: Discovering and Deploying User and Content Profiles.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Task and domain specific modelling in the carnegie mellon communicator system", "authors": [ { "first": "Alexander", "middle": [ "I" ], "last": "Rudnicky", "suffix": "" }, { "first": "Christina", "middle": [ "L" ], "last": "Bennett", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "Ananlada", "middle": [], "last": "Chotimongkol", "suffix": "" }, { "first": "Kevin", "middle": [ "A" ], "last": "Lenzo", "suffix": "" }, { "first": "Alice", "middle": [], "last": "Oh", "suffix": "" }, { "first": "Rita", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2000, "venue": "INTERSPEECH", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander I. Rudnicky, Christina L. Bennett, Alan W. Black, Ananlada Chotimongkol, Kevin A. Lenzo, Alice Oh, and Rita Singh. 2000. Task and domain specific modelling in the carnegie mellon communi- cator system. In INTERSPEECH.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Underspecified query refinement via natural language question generation", "authors": [ { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "Micheal", "middle": [], "last": "Gamon", "suffix": "" } ], "year": 2012, "venue": "COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hassan Sajjad, Patrick Pantel, and Micheal Gamon. 2012. Underspecified query refinement via natural language question generation. In COLING.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Probabilistic part-of-speech tagging using decision trees", "authors": [ { "first": "Helmut", "middle": [], "last": "Schmid", "suffix": "" } ], "year": 1994, "venue": "NEMLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Helmut Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In NEMLP.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Indri: A language modelbased search engine for complex queries", "authors": [ { "first": "Trevor", "middle": [], "last": "Strohman", "suffix": "" }, { "first": "Donald", "middle": [], "last": "Metzler", "suffix": "" }, { "first": "Howard", "middle": [], "last": "Turtle", "suffix": "" }, { "first": "W Bruce", "middle": [], "last": "Croft", "suffix": "" } ], "year": 2005, "venue": "ICIA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Trevor Strohman, Donald Metzler, Howard Turtle, and W Bruce Croft. 2005. Indri: A language model- based search engine for complex queries. 
In ICIA.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Jupiter: A telephone-based conversational interface for weather information", "authors": [ { "first": "Victor", "middle": [], "last": "Zue", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Seneff", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Polifroni", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Pao", "suffix": "" }, { "first": "Timothy", "middle": [ "J" ], "last": "Hazen", "suffix": "" }, { "first": "Lee", "middle": [], "last": "Hetherington", "suffix": "" } ], "year": 2000, "venue": "IEEE Trans. on Speech and Audio Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victor Zue, Stephanie Seneff, James Glass, Joseph Polifroni, Christine Pao, Timothy J. Hazen, and Lee Hetherington. 2000. Jupiter: A telephone-based conversational interface for weather information. IEEE Trans. on Speech and Audio Processing.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Figure 1: System Description", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "Dependency Parse Trees for various forms of the same problem.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "Dependency parse: I entered the dns because I do not have a strong cell phone signal.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": "Success at N.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF5": { "text": "MAP at N.", "num": null, "uris": null, "type_str": "figure" }, "TABREF2": { "text": "Relevant short segments for a forum post.", "html": null, "content": "", "type_str": "table", "num": null }, "TABREF3": { "text": "First Post: I cannot hear the notifications on my bluetooth now. it's at normal volume when i send a message but if i receive an email or text the volume is very low yet I have all volumes up all the way. Is there a new bluetooth volume i have to turn up with ios 4? or is it that another update screwed with the bluetooth again. Was working just great before i updated ios 4. please help me. thanks! | Phrases: volume, bluetooth volume, bluetooth, notifications, update, ios, normal volume, work, text, way, email, message, new bluetooth volume | Phrases + Att-Val pairs: volume, bluetooth volume, bluetooth, notifications, update, ios, normal volume, work, text, way, email, message, new bluetooth volume, ios 4 | Phrases + Att-Val pairs + Act-Att tuples: volume, bluetooth volume, bluetooth, notifications, update, ios, normal volume, work, text, way, email, message, new bluetooth volume, ios 4, low volume, I hear notifications on bluetooth, update screwed bluetooth, I send message, I receive email, it is at normal volume, working great before updated, I missed emails, *I volumes way.", "html": null, "content": "
", "type_str": "table", "num": null }, "TABREF4": { "text": "Problem representations for a forum post.", "html": null, "content": "
", "type_str": "table", "num": null }, "TABREF5": { "text": "Specific and under-specified queries", "html": null, "content": "
", "type_str": "table", "num": null }, "TABREF6": { "text": "Phrases Is your query related to [unit] ? eg: Is your query related to osx widgets ? Attribute-Value pairs (if single value) Is your [attribute] [value] ? eg: Is your wifi signal strong ? Attribute-Value pairs (if multiple values) Is your [attribute]: Option 1 : [value 1 ] ... Option n :[value n ] ? eg: Is your outlook calendar: Option 1 :2003 Option 2 :2007 ? Action-Attribute tuples (Verb/Action is VB: base form) Does the [ARG 1 (sg)] [VERB] the [ARG 2 ] [ARG 3 ] ? Do the [ARG 1 (pl)] [VERB] the[ARG 2 ] [ARG 3 ] ? ARG 1 is empty / ARG 1 is a pronoun Do you want to [VERB] the[ARG 2 ] [ARG 3 ] ? eg: Does the wifi network prompt the password ? from the tuple: wifi network-prompt-password-null Action-Attribute (Verb/Action is VBP: non-3rd person, singular, present) [ARG 1 ] [VERB] [ARG 2 ] [ARG 3 ] ? ARG 1 is empty / ARG 1 is a pronoun Do you want to [VERB] the [ARG 2 ] [ARG 3 ] ? eg: Do you want to send the emails ? from the tuple: null-send-emails-null Action-Attribute (Verb/Action is VBN: past participle) ARG 1 is empty / ARG 1 is a pronoun Have you [VERB] [ARG 2 ] [ARG 3 ] ? ARG 2 and ARG 3 are empty Has the [ARG 1 (sg)] been [VERB] ? ARG 2 and ARG 3 are empty Have the [ARG 1 (pl)] been [VERB] ? Has the [ARG 1 (sg)] [VERB] the [ARG 2 ] [ARG 3 ] ? Have the [ARG 1 (pl)] [VERB] the [ARG 2 ] [ARG 3 ] ? eg:Has the update caused the phone to crash ? from the tuple: update-caused-phone-to crash Action-Attribute (Verb/Action is VBZ: 3rd person, singular, present) Does the [ARG 1 ] [VERB V B ] the [ARG 2 ] [ARG 3 ] ? ARG 1 is empty Does the phone [VERB V B ] the [ARG 2 ] [ARG 3 ] ? eg: Does the iphone use idol support from the tuple: iphone-uses-idol support-null Action-Attribute (Verb/Action is VBD: past tense) Has the [ARG 1 (sg)] [VERB V BN ] [ARG 2 ] [ARG 3 ] ? ARG 1 is empty Have the [ARG 2 (pl)] [ARG 3 ] been [VERB V BN ] ? ARG 1 is empty Is the [ARG 2 (sg)] [ARG 3 ] [VERB V BN ] ? ARG 1 is a pronoun Have you [VERB V BN ] [ARG 2 ] [ARG 3 ] ? Have the [ARG 1 (pl)] [VERB V BN ] [ARG 2 ] [ARG 3 ] ? eg: Has the iphone found several networks ? from the tuple: iphone-found-several networks-null Action-Attribute (Verb/Action is VBG: gerund/present participle) Is the [ARG 1 (sg)] [VERB] the [ARG 2 ] [ARG 3 ] ? Are the [ARG 1 (pl)] [VERB] the [ARG 2 ] [ARG 3 ] ? ARG 1 is empty Is the phone [VERB] the [ARG 2 ] [ARG 3 ] ? eg: Is the site delivering the flash version ? from the tuple: site-delivering-flash version-null", "html": null, "content": "
Unit's Type | Template (with examples)
", "type_str": "table", "num": null }, "TABREF7": { "text": "Templates for the follow up Question Generation .", "html": null, "content": "", "type_str": "table", "num": null }, "TABREF8": { "text": "In terms of all three measures, our system is able", "html": null, "content": "
Unit's Type | MRR
Baseline | 0.3997
Random | 0.4021
Phrases (with Top=5) | 0.6548
Phrases, Pairs (with Top=5) | 0.6745
Phrases, Pairs, Tuples (with Top=5) | 0.7362
", "type_str": "table", "num": null }, "TABREF9": { "text": "MRR for different unit types.", "html": null, "content": "", "type_str": "table", "num": null } } } }